Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World

Cade Metz
{"title":"Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World","authors":"C. Metz","doi":"10.56315/pscf9-22metz","DOIUrl":null,"url":null,"abstract":"GENIUS MAKERS: The Mavericks Who Brought AI to Google, Facebook, and the World by Cade Metz. New York: Dutton, 2021. 371 pages including notes, references, and index. Hardcover; $28.00. ISBN: 9781524742676. *As Cade Metz says in the acknowledgments section, this is a book \"not about the technology [of AI] but about the people building it ... I was lucky that the people I wanted to write about were so interesting and so eloquent and so completely different from one [an]other\" (p. 314). *And, that's what this book is about. It is about people such as Geoff Hinton, founder of DNNresearch, who, once he reached his late fifties, never sat down because of his bad back. It is about others who came after him, including Yann LeCun, Ian Goodfellow, Andrew Ng, Yoshua Bengio, Jeff Dean, Jürgen Schmidhuber, Li Deng, Ilya Sutskever, Alex Krizhevsky, Demis Hassabis, and Shane Legg, each of whom had their strengths, weaknesses, and quirks. *The book also follows the development of interest in AI by companies like Google, Microsoft, Facebook, DeepMind, and OpenAI. DeepMind is perhaps the least known of these. It is the company, led by Demis Hassabis, that first made headlines by training a neural network to play old Atari games such as Space Invaders, Pong, and Breakout, using a new technique called reinforcement learning. It attracted a lot of attention from investors such as Elon Musk, Peter Thiel, and Google's Larry Page. *While most companies were interested in the application of AI to improve their products, DeepMind's goal was AGI, \"Artificial General Intelligence\"--technology that could do anything the human brain could do, only better. DeepMind was also the first company to take a stand on two issues: if the company was bought out (which it was, by Google), (1) their technology would not be used for military purposes, and (2) an independent ethics board would oversee the use of DeepMind's AGI technology, whenever that would arrive (p. 116). *Part One of the book, \"A New Kind of Machine,\" follows the early players in the field as they navigate the early \"AI winters,\" experiment with various new algorithms and technologies, and have breakthroughs and disappointments. From the beginning, there were clashes between personalities, collaboration and competition, and promises kept and broken. *Part Two of the book, titled \"Who Owns Intelligence?,\" explores how many of the people named above were wooed by the different companies, and moved back and forth between them, sometimes working together and sometimes competing with each other. The companies understood the power of neural networks and deep learning, but they could not develop the technologies without the direction of the leading researchers, who were in limited supply. To woo the best researchers, the companies competed to develop exciting and show-stopping technology, such as self-driving cars and an AI to play (and beat) the best in Chess and Go. *In Part Three, \"Turmoil,\" the author explores how the players began to realize the shortcomings and potentially dangerous effects of the AI systems. AI systems were becoming more and more capable in a variety of tasks. \"Deep fakes\" of celebrities and the auto-generation of fake news (often on Facebook) led many to question the direction AI was going. 
Ian Goodfellow said, \"There's a lot of other areas where AI is opening doors that we've never opened before. And we don't really know what's on the other side\" (p. 211). One surprising figure taking a stand on the side of caution was Elon Musk, giving repeated warnings of the possible rise of superintelligent actors. Further, it was discovered that the Chinese government was already using AI to do facial recognition and track its citizens as they moved about. *Other concerns dampened the community: it was discovered that small and unexpected flaws in training could have significant effects on the ability of an AI system to do its job. For example, \"by slapping a few Post-it notes on a stop sign, [researchers] could fool a car into thinking it wasn't there\" (p. 212). *Additionally, the biases in training data were being exposed, leading some to believe that AI systems would not equally benefit minority groups, and could even discriminate against them. Furthermore, Google was being approached by the US government to assist in the development of programs which could be used in warfare. Finally, Facebook was struggling to contain fake news and finding that even AIs could not effectively be used to combat it. *In the final sections of the book, the author explores the AI researchers' attitudes toward the future and the big questions. Will AI systems be able to eventually take over all work, even physical labor? Can the AI juggernaut be controlled and directed? Will AGI be fully realized? *This last question is explored in the chapter titled \"Religion.\" \"Belief in AGI required a leap of faith. But it drove some researchers forward in a very real way. It was something like a religion,\" said roboticist Sergey Levine (p. 290). The question of the feasibility of AGI continues to generate much debate, with one camp claiming that it is inevitable, while the other camp insisting that AI systems will excel only in limited tasks and environments. *As a Christian, I found the debates about the proper role of AI to be intriguing. Is the development of AGI inevitable? Should we as Christians petition companies and governments to have debates on the pursuit of AGI? Should we enact laws to limit or prohibit the use of AI in warfare? Should independent evaluators be required to review AI systems regarding discrimination? Should Christians participate in the further development of AGI? *Learning the histories and attitudes of the leading individuals in the development of AI also intrigued me. Many of the individuals seem to have very little concern for the potentially negative impact of their work. Their only motivation seems to be fame and fortune. It makes me wonder if the field of computer science should require all its practitioners to take ethics training like professional engineers are required to do. This book certainly confirms the importance of ethics in the field of computer science and the need for its practitioners to be people of virtue. *In summary, this was a different kind of book from many others in the field of technology. It was fascinating that so much of what I was reading about had happened in just the last ten years. Hearing the anecdotes of back-office meetings, public outcries, and false claims was intriguing. If you, like me, wonder how we got to where we are today in the area of AI, this is the book for you. *Reviewed by Victor T. 
Norman, Assistant Professor of Computer Science, Calvin University, Grand Rapids, MI 49546.","PeriodicalId":53927,"journal":{"name":"Perspectives on Science and Christian Faith","volume":null,"pages":null},"PeriodicalIF":0.2000,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Perspectives on Science and Christian Faith","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.56315/pscf9-22metz","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"RELIGION","Score":null,"Total":0}
引用次数: 6

Abstract

GENIUS MAKERS: The Mavericks Who Brought AI to Google, Facebook, and the World by Cade Metz. New York: Dutton, 2021. 371 pages including notes, references, and index. Hardcover; $28.00. ISBN: 9781524742676.

As Cade Metz says in the acknowledgments section, this is a book "not about the technology [of AI] but about the people building it ... I was lucky that the people I wanted to write about were so interesting and so eloquent and so completely different from one [an]other" (p. 314).

And that is what this book is about. It is about people such as Geoff Hinton, founder of DNNresearch, who, once he reached his late fifties, never sat down because of his bad back. It is about others who came after him, including Yann LeCun, Ian Goodfellow, Andrew Ng, Yoshua Bengio, Jeff Dean, Jürgen Schmidhuber, Li Deng, Ilya Sutskever, Alex Krizhevsky, Demis Hassabis, and Shane Legg, each of whom had their own strengths, weaknesses, and quirks.

The book also follows the development of interest in AI by companies such as Google, Microsoft, Facebook, DeepMind, and OpenAI. DeepMind is perhaps the least known of these. It is the company, led by Demis Hassabis, that first made headlines by training a neural network to play old Atari games such as Space Invaders, Pong, and Breakout, using a technique called deep reinforcement learning. It attracted a great deal of attention from investors such as Elon Musk, Peter Thiel, and Google's Larry Page.

While most companies were interested in applying AI to improve their products, DeepMind's goal was AGI, "Artificial General Intelligence": technology that could do anything the human brain could do, only better. DeepMind was also the first company to take a stand on two issues: if it were bought out (which it was, by Google), (1) its technology would not be used for military purposes, and (2) an independent ethics board would oversee the use of DeepMind's AGI technology, whenever that arrived (p. 116).

Part One of the book, "A New Kind of Machine," follows the early players in the field as they navigate the early "AI winters," experiment with various new algorithms and technologies, and experience breakthroughs and disappointments. From the beginning, there were clashes of personality, collaboration and competition, and promises kept and broken.

Part Two, titled "Who Owns Intelligence?," explores how the people named above were wooed by the different companies and moved back and forth between them, sometimes working together and sometimes competing with one another. The companies understood the power of neural networks and deep learning, but they could not develop the technologies without the direction of the leading researchers, who were in limited supply. To woo the best researchers, the companies competed to develop exciting, show-stopping technology, such as self-driving cars and AI systems that could play (and beat) the best players of chess and Go.

In Part Three, "Turmoil," the author explores how the players began to realize the shortcomings and potentially dangerous effects of AI systems, which were becoming more and more capable at a wide variety of tasks. "Deep fakes" of celebrities and the auto-generation of fake news (often on Facebook) led many to question the direction AI was going. Ian Goodfellow said, "There's a lot of other areas where AI is opening doors that we've never opened before. And we don't really know what's on the other side" (p. 211).

One surprising figure taking a stand on the side of caution was Elon Musk, who gave repeated warnings about the possible rise of superintelligent actors. Further, it was discovered that the Chinese government was already using AI for facial recognition and to track its citizens as they moved about.

Other concerns dampened the community's enthusiasm: it was discovered that small and unexpected flaws in training could have significant effects on the ability of an AI system to do its job. For example, "by slapping a few Post-it notes on a stop sign, [researchers] could fool a car into thinking it wasn't there" (p. 212).

Additionally, biases in training data were being exposed, leading some to believe that AI systems would not equally benefit minority groups and could even discriminate against them. Furthermore, Google was being approached by the US government to assist in the development of programs that could be used in warfare. Finally, Facebook was struggling to contain fake news and finding that even AIs could not be used effectively to combat it.

In the final sections of the book, the author explores the AI researchers' attitudes toward the future and the big questions. Will AI systems eventually be able to take over all work, even physical labor? Can the AI juggernaut be controlled and directed? Will AGI be fully realized?

This last question is explored in the chapter titled "Religion." "Belief in AGI required a leap of faith. But it drove some researchers forward in a very real way. It was something like a religion," said roboticist Sergey Levine (p. 290). The feasibility of AGI continues to generate much debate, with one camp claiming that it is inevitable and the other insisting that AI systems will excel only in limited tasks and environments.

As a Christian, I found the debates about the proper role of AI intriguing. Is the development of AGI inevitable? Should we as Christians petition companies and governments to debate the pursuit of AGI? Should we enact laws to limit or prohibit the use of AI in warfare? Should independent evaluators be required to review AI systems for discrimination? Should Christians participate in the further development of AGI?

Learning the histories and attitudes of the leading individuals in the development of AI also intrigued me. Many of them seem to have very little concern for the potentially negative impact of their work; their only motivation seems to be fame and fortune. It makes me wonder whether the field of computer science should require all its practitioners to take ethics training, as professional engineers are required to do. This book certainly confirms the importance of ethics in the field of computer science and the need for its practitioners to be people of virtue.

In summary, this is a different kind of book from many others in the field of technology. It was fascinating that so much of what I was reading about had happened in just the last ten years. The anecdotes of back-office meetings, public outcries, and false claims were intriguing. If you, like me, wonder how we got to where we are today in the area of AI, this is the book for you.

Reviewed by Victor T. Norman, Assistant Professor of Computer Science, Calvin University, Grand Rapids, MI 49546.