Regulators, researchers develop AI safeguards

Laurel Oldach

C&EN Global Enterprise | Published 2023-11-13 | DOI: https://doi.org/10.1021/cen-10137-scicon1

Abstract

Artificial intelligence and machine learning tools are already being used to power voice assistants and self-driving cars, determine what users see on the internet, and guide drug design and chemical syntheses. But there are concerns about their ability to push disinformation, compromise cybersecurity, and engineer harmful biological materials. Governments around the world hope to mitigate those risks without quashing progress on the problems that AI seems poised to solve. A recent executive order by US president Joe Biden announced measures to make AI systems safer, such as requiring their developers to search for ways that bad actors could exploit the tools. Shortly after the order’s announcement, government and corporation representatives gathered in the UK for a summit on the risks of AI; 28 countries signed a declaration that supports continuing development of the technology but calls for more research into its potential risks. Many parts of the chemical enterprise