{"title":"[self.]: an Interactive Art Installation that Embodies Artificial Intelligence and Creativity","authors":"A. Tidemann, Øyvind Brandtsegg","doi":"10.1145/2757226.2764549","DOIUrl":null,"url":null,"abstract":"This paper describes [self.], an open source art installation that embodies artificial intelligence (AI) in order to learn, react, respond and be creative in its environment. Biologically inspired models are implemented to achieve this behaviour. The robot is built using a moving head, projector, camera and microphones. No form of knowledge or grammar have been implemented in the AI, the system starts in a ``tabula rasa' state and learns everything via its own sensory channels, forming categories in a bottom-up fashion. The robot recognizes sounds, and is able to recognize similar sounds, link them with the corresponding faces, and use the knowledge of past experiences to form new sentences. It projects neural memories that represent an association between sound and video as experienced during interaction.","PeriodicalId":231794,"journal":{"name":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2757226.2764549","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
This paper describes [self.], an open source art installation that embodies artificial intelligence (AI) in order to learn, react, respond and be creative in its environment. Biologically inspired models are implemented to achieve this behaviour. The robot is built from a moving head, a projector, a camera and microphones. No form of knowledge or grammar has been implemented in the AI; the system starts in a "tabula rasa" state and learns everything through its own sensory channels, forming categories in a bottom-up fashion. The robot recognizes sounds and can match similar sounds, link them with the corresponding faces, and use the knowledge of past experiences to form new sentences. It projects neural memories that represent an association between sound and video as experienced during interaction.
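The abstract does not specify how the bottom-up category formation or the sound-face association is implemented, so the following is only a minimal illustrative sketch, not the authors' biologically inspired models. It assumes incoming utterances are already reduced to feature vectors and uses two hypothetical classes, BottomUpCategorizer and AssociativeMemory, to show the general idea of starting from an empty state, forming sound categories on the fly, and linking them to co-occurring faces.

```python
import numpy as np


class BottomUpCategorizer:
    """Groups sensory feature vectors into categories, starting from an
    empty ("tabula rasa") state: a new category is created whenever an
    input is too far from every existing prototype."""

    def __init__(self, distance_threshold=2.0):
        self.threshold = distance_threshold
        self.prototypes = []   # one running-mean feature vector per category
        self.counts = []       # number of samples seen per category

    def categorize(self, features):
        features = np.asarray(features, dtype=float)
        if self.prototypes:
            dists = [np.linalg.norm(features - p) for p in self.prototypes]
            best = int(np.argmin(dists))
            if dists[best] < self.threshold:
                # update the winning prototype as a running mean
                self.counts[best] += 1
                self.prototypes[best] += (features - self.prototypes[best]) / self.counts[best]
                return best
        # no sufficiently similar category exists: form a new one
        self.prototypes.append(features.copy())
        self.counts.append(1)
        return len(self.prototypes) - 1


class AssociativeMemory:
    """Links sound categories to the faces that co-occurred with them,
    so past experiences can later be recalled and recombined."""

    def __init__(self):
        self.sound_to_faces = {}   # sound category -> list of face ids

    def associate(self, sound_category, face_id):
        self.sound_to_faces.setdefault(sound_category, []).append(face_id)

    def recall_faces(self, sound_category):
        return self.sound_to_faces.get(sound_category, [])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    categorizer = BottomUpCategorizer(distance_threshold=2.0)
    memory = AssociativeMemory()

    # Simulated interaction: each utterance is a feature vector heard
    # while a particular face is visible.
    for face_id in ("face_A", "face_B"):
        centre = rng.normal(size=8)
        for _ in range(5):
            utterance = centre + 0.1 * rng.normal(size=8)
            category = categorizer.categorize(utterance)
            memory.associate(category, face_id)

    print("categories formed:", len(categorizer.prototypes))
    print("faces linked to category 0:", set(memory.recall_faces(0)))
```

The sketch only illustrates the interaction loop described in the abstract (hear a sound, categorize it, associate it with the visible face); the installation itself additionally stores the accompanying video so that recalled memories can be projected back as sound-video associations.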