To Believe in Siri
Pub Date: 2021-02-18 | DOI: 10.1093/oso/9780190080365.003.0007
Simone Natale
AI voice assistants are software systems that enter into spoken dialogue with users in order to answer queries or execute tasks such as sending emails, searching the web, or turning on a lamp. Each assistant is represented as an individual character or persona (e.g., “Siri” or “Alexa”) that, despite being nonhuman, can be imagined and interacted with as if it were one. Focusing on the cases of Alexa, Siri, and Google Assistant, this chapter argues that voice assistants establish an ambivalent relationship with users: they give users the illusion of control in their interactions with the assistant while withholding actual control over the computing systems that lie behind these interfaces. The chapter illustrates how this is made possible at the interface level by mechanisms of projection that rely on users to contribute to the construction of the assistant as a persona, and how this construction ultimately conceals the networked computing systems administered by the powerful corporations that developed these tools.
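The abstract's point that the persona is an interface layered over ordinary computing machinery can be made concrete with a minimal, hypothetical sketch. None of the names below correspond to any vendor's actual API; this is only a toy illustration of how a transcribed utterance might be routed to a task handler while first-person replies sustain the character.

```python
# Hypothetical sketch: the conversational "persona" is a thin layer over
# ordinary command dispatch. All names are illustrative, not any vendor's
# real API.

def send_email(query: str) -> str:
    return f"Okay, drafting an email based on: {query}."

def web_search(query: str) -> str:
    return f"Here is what I found for: {query}."

def lamp_on(query: str) -> str:
    return "Turning on the lamp."

# Keyword-to-handler table standing in for a real intent classifier.
HANDLERS = {"email": send_email, "search": web_search, "lamp": lamp_on}

def assistant_reply(transcribed_query: str) -> str:
    """Route a transcribed utterance to a task handler; answer in persona."""
    for keyword, handler in HANDLERS.items():
        if keyword in transcribed_query.lower():
            return handler(transcribed_query)
    # A first-person fallback sustains the character even on failure.
    return "Sorry, I didn't catch that."

print(assistant_reply("turn on the lamp"))  # -> "Turning on the lamp."
```

The user addresses "the assistant," but every reply is produced by dispatch logic of this general kind running on remote infrastructure the user never sees, which is the asymmetry the chapter examines.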
{"title":"To Believe in Siri","authors":"Simone Natale","doi":"10.1093/oso/9780190080365.003.0007","DOIUrl":"https://doi.org/10.1093/oso/9780190080365.003.0007","url":null,"abstract":"AI voice assistants are based on software that enters into dialogue with users through speech in order to provide replies to the users’ queries or execute tasks such as sending emails, searching on the web, or turning on a lamp. Every assistant is represented as an individual character or persona (e.g., “Siri” or “Alexa”) that despite being nonhuman can be imagined and interacted with as such. Focusing on the cases of Alexa, Siri, and Google Assistant, this chapter argues that voice assistants activate an ambivalent relationship with users, giving them the illusion of control in their interactions with the assistants while at the same time withdrawing them from actual control over the computing systems that lie behind these interfaces. The chapter illustrates how this is made possible at the interface level by mechanisms of projection that expect users to contribute to the construction of the assistant as a persona, and how this construction ultimately conceals the networked computing systems administered by the powerful corporations who developed these tools.","PeriodicalId":226095,"journal":{"name":"Deceitful Media","volume":"2006 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125561263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Turing Test
Pub Date: 2021-02-18 | DOI: 10.1093/OSO/9780190080365.003.0002
Simone Natale
The relationship between AI and deception was first explored by Alan Turing, who in 1950 famously proposed a practical test addressing the question “Can machines think?” This chapter argues that Turing’s proposal of the Imitation Game, now more commonly known as the Turing test, located the prospects of AI not just in improvements to hardware and software but also in a more complex scenario emerging from the interaction between humans and computers. By placing humans at the center of its design, both as judges and as conversational agents alongside computers, the Turing test created a space in which to imagine and experiment with AI technologies in terms of their credibility to human users. This entailed the discovery that AI was to be achieved not only through the development of more complex and functional computing technologies but also through strategies and techniques that exploit humans’ susceptibility to illusion and deception.
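To make the test's structure concrete, here is a minimal, hypothetical sketch of the Imitation Game loop under its standard human-versus-machine interpretation. Turing's 1950 paper specifies a thought experiment, not code, so every name and detail below is an assumption for illustration only.

```python
# Toy sketch of the Imitation Game's structure (an assumption-laden
# illustration, not Turing's specification): a judge exchanges text with
# two hidden respondents and must decide which one is the machine.

import random

def machine_respondent(question: str) -> str:
    # Placeholder "program": a real entrant would try to generate
    # credible, human-seeming answers.
    return "I would rather not say."

def human_respondent(question: str) -> str:
    return input(f"[hidden human] {question}\n> ")

def imitation_game(questions: list[str]) -> bool:
    """Return True if the machine deceives the judge."""
    machine_label = random.choice(["A", "B"])  # hide the machine's identity
    respondents = {
        machine_label: machine_respondent,
        "B" if machine_label == "A" else "A": human_respondent,
    }
    for question in questions:
        for label in ("A", "B"):
            print(f"{label}: {respondents[label](question)}")
    guess = input("Judge: which respondent is the machine, A or B? ")
    # The machine "passes" when the judge misidentifies it.
    return guess.strip().upper() != machine_label
```

Even in this toy form, the design puts human judgment at the center: success is defined not by any property of the machine itself but by whether its output is credible enough to mislead the judge, which is exactly the chapter's point about deception.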
{"title":"The Turing Test","authors":"Simone Natale","doi":"10.1093/OSO/9780190080365.003.0002","DOIUrl":"https://doi.org/10.1093/OSO/9780190080365.003.0002","url":null,"abstract":"The relationship between AI and deception was initially explored by Alan Turing, who famously proposed in 1950 a practical test addressing the question “Can machines think?” This chapter argues that Turing’s proposal of the Imitation Game, now more commonly called the Turing test, located the prospects of AI not just in improvements of hardware and software but also in a more complex scenario emerging from the interaction between humans and computers. The Turing test, by placing humans at the center of its design as judges and as conversation agents alongside computers, created a space to imagine and experiment with AI technologies in terms of their credibility to human users. This entailed the discovery that AI was to be achieved not only through the development of more complex and functional computing technologies but also through the use of strategies and techniques exploiting humans’ liability to illusion and deception.","PeriodicalId":226095,"journal":{"name":"Deceitful Media","volume":"199 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133653294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}