Creative tools today strive to amplify our ability to create high-quality work. However, experiencing failure is also an important part of mastering creative skills. While experts have developed strategies for engaging in risky experiments and learning from mistakes, novices lack the experience and mindset needed to use failures as opportunities for growth. Current tools intimidate the unsure novice, as they are designed around showcasing success or critiquing finished work, rather than providing safe spaces for experimentation. To better support experiences of failure for novices, we instead propose flipping the value of failure in creativity tools from something to avoid to something to pursue actively. To do this, we develop a taxonomy of creative activities that people engage in when they aim to succeed. We then invert this taxonomy to derive a new set of creative activities where deliberate failure can provide a path towards creative confidence. Lastly, we envision possible creativity support tools as examples of the potential value of supporting activities where failure is encouraged and showcased.
{"title":"Designing Creativity Support Tools for Failure","authors":"Joy Kim, Avi Bagla, Michael S. Bernstein","doi":"10.1145/2757226.2764542","DOIUrl":"https://doi.org/10.1145/2757226.2764542","url":null,"abstract":"Creative tools today strive to amplify our ability to create high-quality work. However, experiencing failure is also an important part of mastering creative skills. While experts have developed strategies for engaging in risky experiments and learning from mistakes, novices lack the experience and mindset needed to use failures as opportunities for growth. Current tools intimidate the unsure novice, as they are designed around showcasing success or critiquing finished work, rather than providing safe spaces for experimentation. To better support experiences of failure for novices, we instead propose flipping the value of failure in creativity tools from something to avoid to something to pursue actively. To do this, we develop a taxonomy of creative activities that people engage in when they aim to succeed. We then invert this taxonomy to derive a new set of creative activities where deliberate failure can provide a path towards creative confidence. Lastly, we envision possible creativity support tools as examples of the potential value of supporting activities where failure is encouraged and showcased.","PeriodicalId":231794,"journal":{"name":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115414502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lichtsuchende is an interactive installation, built using a society of biologically inspired, cybernetic creatures who exchange light as a source of energy and a means of communication. Visitors are invited to engage with the installation using torches to influence and interact with the phototropic robots. The embodied algorithms give rise to emergent behaviours with communicative and emotional resonance, allowing a duet between the humans and the cybernetic beings.
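As a rough illustration of the kind of embodied, phototropic behaviour described above (a hypothetical sketch, not the installation's actual firmware), one creature's control loop might turn toward the brightest light, harvest energy from it, and shine back once enough energy is stored; all names and constants here are illustrative:

```python
# Minimal sketch (an assumption, not the installation's firmware) of one
# phototropic creature's behaviour loop: turn toward the brightest light,
# accumulate energy from it, and emit light back once enough is stored.
def phototropic_step(sensor_readings, heading, energy, turn_rate=0.1, emit_threshold=10.0):
    """One control step: sensor_readings maps angles (degrees) to light intensity."""
    brightest_angle = max(sensor_readings, key=sensor_readings.get)
    heading += turn_rate * (brightest_angle - heading)   # turn toward the light source
    energy += sensor_readings[brightest_angle] * 0.01    # harvest light as energy
    emitting = energy >= emit_threshold                  # respond by shining back
    if emitting:
        energy -= emit_threshold
    return heading, energy, emitting

if __name__ == "__main__":
    heading, energy = 0.0, 0.0
    readings = {-45: 0.2, 0: 0.5, 45: 3.0}               # a torch shines from the right
    for _ in range(400):
        heading, energy, emitting = phototropic_step(readings, heading, energy)
    print(round(heading, 1))                             # heading has drifted toward 45
```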
{"title":"Lichtsuchende: A Society of Cybernetic, Phototropic Sunflowers","authors":"Dave Murray-Rust, Rocio von Jungenfeld","doi":"10.1145/2757226.2757381","DOIUrl":"https://doi.org/10.1145/2757226.2757381","url":null,"abstract":"Lichtsuchende is an interactive installation, built using a society of biologically inspired, cybernetic creatures who exchange light as a source of energy and a means of communication. Visitors are invited to engage with the installation using torches to influence and interact with the phototropic robots. The embodied algorithms give rise to emergent behaviours with communicative and emotional resonance, allowing a duet between the humans and the cybernetic beings.","PeriodicalId":231794,"journal":{"name":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124248227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This demonstration paper describes [self.], an open-source art installation that embodies artificial intelligence in order to learn, react, respond and be creative in its environment. Biologically inspired models are implemented to achieve this behaviour. The robot is built using a moving head, projector, camera and microphones. No form of knowledge or grammar has been implemented in the AI; the entity learns everything via its own sensory channels, forming categories in a bottom-up fashion. The robot recognizes sounds, is able to recognize similar sounds and link them with the corresponding faces, and uses the knowledge of past experiences to form new sentences. It projects neural memories that represent an association between sound and video as experienced during interaction.
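A minimal sketch of the bottom-up sound-face association described above, under assumptions of my own (the class name, distance threshold, and feature vectors are illustrative; this is not the installation's actual architecture):

```python
# Minimal sketch (assumption, not the installation's code) of bottom-up category
# formation: incoming sound feature vectors are clustered online, and each sound
# category is associated with the face seen at the same moment.
import numpy as np

class BottomUpAssociator:
    def __init__(self, threshold=1.0):
        self.threshold = threshold      # distance below which a sound counts as "similar"
        self.categories = []            # one prototype vector per learned sound category
        self.associations = []          # face label co-occurring with each category

    def experience(self, sound_features, face_label):
        """Learn from one interaction: a sound (feature vector) heard while a face is visible."""
        sound_features = np.asarray(sound_features, dtype=float)
        if self.categories:
            dists = [np.linalg.norm(sound_features - c) for c in self.categories]
            best = int(np.argmin(dists))
            if dists[best] < self.threshold:
                # Similar sound heard before: refine the prototype, keep its face association.
                self.categories[best] = 0.9 * self.categories[best] + 0.1 * sound_features
                return best
        # Novel sound: form a new category and associate it with the current face.
        self.categories.append(sound_features)
        self.associations.append(face_label)
        return len(self.categories) - 1

    def recall_face(self, sound_features):
        """Given a new sound, recall which face it was learned with (or None)."""
        if not self.categories:
            return None
        dists = [np.linalg.norm(np.asarray(sound_features) - c) for c in self.categories]
        best = int(np.argmin(dists))
        return self.associations[best] if dists[best] < self.threshold else None

if __name__ == "__main__":
    agent = BottomUpAssociator(threshold=0.5)
    agent.experience([0.2, 0.8], face_label="visitor_A")   # a greeting-like sound with visitor A
    agent.experience([0.9, 0.1], face_label="visitor_B")   # a different sound with visitor B
    print(agent.recall_face([0.21, 0.79]))                  # -> visitor_A
```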
{"title":"[self.]: an Interactive Art Installation that Embodies Artificial Intelligence and Creativity: A Demonstration","authors":"A. Tidemann, Øyvind Brandtsegg","doi":"10.1145/2757226.2767691","DOIUrl":"https://doi.org/10.1145/2757226.2767691","url":null,"abstract":"This demonstration paper describes [self.], an open source art installation that embodies artificial intelligence in order to learn, react, respond and be creative in its environment. Biologically inspired models are implemented to achieve this behaviour. The robot is built using a moving head, projector, camera and microphones. No form of knowledge or grammar have been implemented in the AI, the entity learns everything via its own sensory channels, forming categories in a bottom-up fashion. The robot recognizes sounds, and is able to recognize similar sounds, link them with the corresponding faces, and use the knowledge of past experiences to form new sentences. It projects neural memories that represent an association between sound and video as experienced during interaction.","PeriodicalId":231794,"journal":{"name":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129734973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minority groups are the fastest-growing demographic in the U.S., and the poverty level in the U.S. is the highest it has been in the last 50 years. We argue that the community needs more research addressing this user segment, and we present a novel study of how underserved youths react when presented with different UI designs aimed at promoting creative writing. The act of creative writing can itself become a driver of change among underserved teenagers, and researchers should strive to discover novel UI designs that can effectively increase this target group's productivity, creativity, and mental well-being. Using MS Word as a baseline, our contribution analyzes the influence of a Zen-like tool designed by the authors (Haven), a nostalgic but realistic typewriter tool (Hanx Writer), and a stress-based tool that eliminates writer's block by providing consequences for procrastination (Write or Die). Our results suggest that the Zen characteristics of Haven conveyed a sense of calm and concentration to users, helping them feel better and write more. The nostalgic Hanx typewriter also fared very well with regard to mental well-being and productivity, as measured by the average number of words written. Contrary to our initial expectations, the stress-based UI (Write or Die) produced the lowest productivity levels.
{"title":"You're the Voice: Evaluating User Interfaces for Encouraging Underserved Youths to express themselves through Creative Writing","authors":"Frederica Gonçalves, Pedro F. Campos, J. Hanna, S. Ashby","doi":"10.1145/2757226.2757236","DOIUrl":"https://doi.org/10.1145/2757226.2757236","url":null,"abstract":"Minority groups are the fastest growing demographic in the U.S. In addition, the poverty level in the U.S. is the highest it has been in the last 50 years. We argue that the community needs more research addressing this user segment, and we present a novel study about how underserved youths react when presented with different UI designs aimed at promoting creative writing. The act of creative writing per se can become the driver of change among underserved teenagers, and researchers should strive to discover novel UI designs that can effectively increase this target group's productivity, creativity and mental well-being. Using MS Word as baseline, our contribution analyzes the influence of a Zen-like tool (designed by the authors and called Haven), a nostalgic but realistic typewriting tool (Hanx Writer), and a stress-based tool that eliminates writer's block by providing consequences for procrastination (Write or Die). Our results suggest that the Zen characteristics of our tool Haven were capable of conveying a sense of calm and concentration to the users, making them feel better and also write more. The nostalgic Hanx typewriter also fared very well with regard to mental well-being and productivity, as measured by average number of words written. Contrary to our initial expectations, the stress-based UI (Write or Die) had the lowest productivity levels.","PeriodicalId":231794,"journal":{"name":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130586308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sarlacc, an audio-visual performance, features visuals live-coded within an OpenGL fragment shader that react to incoming audio frequencies parsed by band, to beats per minute, and to Open Sound Control data. The sound component is performed using Ableton Live and analog synthesis.
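As a hedged illustration of the audio-analysis side (not the performance's actual code), a frame of audio can be split into per-band energies that would then drive the live-coded fragment shader, e.g. as uniforms; the band ranges and uniform names below are assumptions:

```python
# Minimal sketch (an assumption, not the performance's actual pipeline): split an
# incoming mono audio frame into frequency-band energies that could be passed as
# uniforms to a live-coded fragment shader each frame.
import numpy as np

def band_energies(frame, sample_rate=44100, bands=((20, 250), (250, 2000), (2000, 8000))):
    """Return one energy value per frequency band for a mono audio frame."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return [float(spectrum[(freqs >= lo) & (freqs < hi)].sum()) for lo, hi in bands]

if __name__ == "__main__":
    t = np.linspace(0, 1024 / 44100, 1024, endpoint=False)
    frame = np.sin(2 * np.pi * 440 * t)          # a 440 Hz test tone
    low, mid, high = band_energies(frame)
    # In a real setup these values would be uploaded to the shader each frame,
    # e.g. as uniforms u_low, u_mid, u_high (names here are illustrative).
    print(low, mid, high)
```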
{"title":"Sarlacc","authors":"Shawn Lawson, Ryan Ross Smith","doi":"10.1145/2757226.2757373","DOIUrl":"https://doi.org/10.1145/2757226.2757373","url":null,"abstract":"Sarlacc, an audio-visual performance, features visuals live coded within the OpenGL fragment shader, that are reactive to incoming audio frequencies parsed by band, beats per minute, and Open Sound Control data. The sound component is performed using Ableton Live and analog synthesis.","PeriodicalId":231794,"journal":{"name":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131586278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
'AirStorm' is a semi-improvised, short 10-minute piece for solo AirSticks and physical-model visualisation, performed by Alon Ilsar and Andrew Bluff respectively. It will be made up of a drum synth, drum samples, other selected samples, and room feedback, triggered and manipulated by Ilsar on this newly built interface for electronic percussionists. The piece will display some of the capabilities of the AirSticks, along with Ilsar's dedication to practicing and composing for this new interface. 'AirStorm' will be based around the conference's theme of 'Computers, Arts and Data' through the choice of samples and the ways they are played. The movement data from Ilsar's AirSticks is processed in real time by Bluff's physics-based visualisation engine, Storm. Particles are pushed around a virtual 3D world in response to the movements of the AirSticks, and rigid-body collisions add a sense of real-world authenticity and complexity. The system responds to drums and movements of the AirSticks with a combination of different visual and physical effects. The real-time visualisations exemplify the movement and sonic complexity of Ilsar's AirSticks performance, providing a visually stimulating and highly synesthetic element to the piece.
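A minimal sketch of how tracked AirSticks positions might push particles around a virtual 3D world (an assumed toy model, not Bluff's Storm engine): each particle is pulled toward the controller position and integrated with simple Euler steps, with constants chosen only for illustration:

```python
# Minimal sketch (an assumption, not the Storm engine itself) of movement-driven
# particles: each particle is attracted toward the tracked controller position
# and integrated with simple Euler steps.
import numpy as np

def step_particles(positions, velocities, controller_pos, dt=1/60, strength=5.0, damping=0.98):
    """Advance particle positions/velocities one frame toward a controller position."""
    offsets = controller_pos - positions                         # vectors from particles to controller
    dists = np.linalg.norm(offsets, axis=1, keepdims=True) + 1e-6
    forces = strength * offsets / dists                          # unit-direction pull, constant magnitude
    velocities = damping * (velocities + forces * dt)
    positions = positions + velocities * dt
    return positions, velocities

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, size=(1000, 3))                     # 1000 particles in a 3D volume
    vel = np.zeros_like(pos)
    controller = np.array([0.0, 1.0, 0.0])                       # e.g. a raised controller position
    for _ in range(120):                                          # two seconds at 60 fps
        pos, vel = step_particles(pos, vel, controller)
    print(pos.mean(axis=0))                                       # particles drift toward the controller
```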
{"title":"'AirStorm,' A New Piece for AirSticks and Storm: Gestural Audio-Visual for Electronic Percussionists","authors":"Alon Ilsar, Andrew Bluff","doi":"10.1145/2757226.2757376","DOIUrl":"https://doi.org/10.1145/2757226.2757376","url":null,"abstract":"'AirStorm' is a semi-improvised short 10-min piece for solo AirSticks and physical model visualisation performed by Alon Ilsar and Andrew Bluff respectively. It will be made up of a drum synth, drum samples, other selected samples and room feedback triggered and manipulated by Ilsar on this newly built interface for electronic percussionists. The piece will display some of the capabilities of the AirSticks along with Ilsar's dedication to practicing and composing for this new interface. \"AirStorm\" will be based around the conferences theme of \"Computers, Arts and Data\" through the choice and samples and ways are played. The movement data from Ilsar's Airsticks is processed in real-time by Bluff's physics based visualisation engine, Storm. Particles are pushed around a virtual 3D world in response to the movements of the AirSticks and rigid body collision adds a sense of real-world authenticity and complexity. The system responds to drums and movements of the AirSticks with a combination of different visual and physical effects. The real-time visualisations exemplify the movement and sonic complexity of Ilsar's AirSticks performance, providing a visually stimulating and highly synesthetic element to the piece.","PeriodicalId":231794,"journal":{"name":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126650273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
System Self Assembly (2015) is an installation resulting from a year-long auto-ethnographic study. The work explores performative concepts of the self, agency, and the redundancy of the modern medium.
{"title":"System Self Assembly: Exploring the Self in the City","authors":"A. Welsby","doi":"10.1145/2757226.2757378","DOIUrl":"https://doi.org/10.1145/2757226.2757378","url":null,"abstract":"System Self Assembly (2015) is an installation that is the result of a year long auto-ethnographic study. The work explores performative concepts of the self, agency, and the redundancy of the modern medium.","PeriodicalId":231794,"journal":{"name":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127807719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Creativity thrives when people experience positive emotions. How to design an interactive system that can effectively make use of this potential is, however, still an unanswered question. In this paper, we propose one approach to this problem that relies on hacking into the cognitive appraisal processes that form part of positive emotions. To demonstrate our approach we have conceived, built, and evaluated a novel interactive system that influences an individual's appraisals of their own idea generation processes by providing real-time and believable feedback about the originality of their ideas. The system can be used to manipulate this feedback to make the user's ideas appear more or less original. This has enabled us to test experimentally the hypothesis that providing feedback that is more positive than the user expects, rather than neutral or more negative feedback, causes more positive emotion, which in turn causes more creativity during idea generation. The findings demonstrate that an interactive system can be designed to use the function of cognitive appraisal processes in positive emotion to help people get more out of their own creative capabilities.
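A minimal sketch of the kind of biased originality feedback described above (an assumption; the authors' actual scoring method is not reproduced here): an idea's originality is approximated by word-level dissimilarity to earlier ideas, and an experimental condition shifts the displayed score up or down:

```python
# Minimal sketch (an assumption, not the authors' system) of biased originality
# feedback: an idea's "originality" is its word-level dissimilarity to earlier
# ideas, and an experimental condition shifts the displayed score up or down.
def originality(idea, previous_ideas):
    """Crude originality score in [0, 1]: 1 minus max word overlap with any earlier idea."""
    words = set(idea.lower().split())
    if not previous_ideas or not words:
        return 1.0
    overlaps = []
    for prev in previous_ideas:
        prev_words = set(prev.lower().split())
        union = words | prev_words
        overlaps.append(len(words & prev_words) / len(union) if union else 0.0)
    return 1.0 - max(overlaps)

def displayed_score(idea, previous_ideas, condition="neutral"):
    """Shift the honest score so ideas appear more or less original, depending on condition."""
    bias = {"positive": +0.2, "neutral": 0.0, "negative": -0.2}[condition]
    return min(1.0, max(0.0, originality(idea, previous_ideas) + bias))

if __name__ == "__main__":
    history = ["a chair made of ice", "a chair that floats"]
    print(displayed_score("a chair made of ice cream", history, condition="positive"))
    print(displayed_score("a chair made of ice cream", history, condition="negative"))
```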
{"title":"Emotion and Creativity: Hacking into Cognitive Appraisal Processes to Augment Creative Ideation","authors":"A. D. Rooij, P. Corr, Sara Jones","doi":"10.1145/2757226.2757227","DOIUrl":"https://doi.org/10.1145/2757226.2757227","url":null,"abstract":"Creativity thrives when people experience positive emotions. How to design an interactive system that can effectively make use of this potential is, however, still an unanswered question. In this paper, we propose one approach to this problem that relies on hacking into the cognitive appraisal processes that form part of positive emotions. To demonstrate our approach we have conceived, made, and evaluated a novel interactive system that influences an individual's appraisals of their own idea generation processes by providing real-time and believable feedback about the originality of their ideas. The system can be used to manipulate this feedback to make the user's ideas appear more or less original. This has enabled us to test experimentally the hypothesis that providing more positive feedback, rather than neutral, or more negative feedback than the user is expecting, causes more positive emotion, which in turn causes more creativity during idea generation. The findings demonstrate that an interactive system can be designed to use the function of cognitive appraisal processes in positive emotion to help people to get more out of their own creative capabilities.","PeriodicalId":231794,"journal":{"name":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127346133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The term design fiction was first used in 2005 by Bruce Sterling [18:30], and in 2009 Julian Bleecker built on the idea by combining it with various other characterisations [cf. 1,2,10], catalysing a step change in design fiction discourse. Since then design fiction has gained significant traction across academic contexts, at symposia and conference events, and through its practice within commercial design studios and industry. Despite becoming a popular way of framing speculative design, the characterisation of design fiction as a research approach still remains "up for grabs" [19:22], as it is "enticing and provocative, yet [...] remains elusive" [7:1]. In 2013 Bleecker remarked, in terms of his studio's own practice, "I don't think we've figured it out" and that "studying it, understanding it and trying to devise some of the principles - of what we're calling design fiction - is what we're trying to do" [1]. Adopting a research through design approach [5,6], this doctoral research intends to shed light on the questions raised by Bleecker by researching design fiction, with design fiction.
{"title":"Researching Design Fiction With Design Fiction","authors":"Joseph Lindley","doi":"10.1145/2757226.2764763","DOIUrl":"https://doi.org/10.1145/2757226.2764763","url":null,"abstract":"The term design fiction was first used in 2005 by Bruce Sterling [18:30] and in 2009 Julian Bleecker built on the idea by combining it with various other characterisations [cf. 1,2,10] and catalysed a step change in design fiction discourse. Since then design fiction has gained significant traction across academic contexts; at symposia and conference events; and through its practice within commercial design studios and industry. Despite becoming a popular way of framing speculative design, the characterisation of design fiction as research approach still remains \"up for grabs\" [19:22] as it is -- enticing and provocative, yet [...] remains elusive\" [7:1]. In 2013 Bleecker remarked in terms of his studios own practice \"I don't think we've figured it out\" and that studying it, understanding it and trying to devise some of the principles - of what we're calling design fiction - is what we're trying to do? [1]. Adopting a research through design approach [5,6], this doctoral research intends to shed light on the questions raised by Bleecker by researching design fiction, with design fiction.","PeriodicalId":231794,"journal":{"name":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132843276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a novel approach to developing co-creative agents that collaborate in real-time creative contexts, such as art and pretend play. Our approach builds upon recent work in computational creativity called interactive machine learning (IML). In IML, agents learn through demonstration, interaction, and real-time feedback from a human user (as opposed to offline training). To apply IML to open-ended creative collaboration, we developed an enactive model of creativity (EMC) based upon the cognitive science theories of enaction. This paper situates our enactive approach to building co-creative agents within the broader field of interactive machine learning by describing the theory, design, and initial prototypes of two co-creative agents.
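A minimal sketch of an interactive machine learning loop of the kind described above (illustrative only, not the authors' prototypes): the agent updates simple action preferences online from the user's demonstrations and real-time feedback, with no offline training phase:

```python
# Minimal sketch (an assumption, not the authors' prototypes) of an interactive
# machine learning loop: the agent learns online from demonstrated actions and
# from immediate feedback, rather than from any offline training phase.
import random

class CoCreativeAgent:
    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}    # preference weight per possible action

    def observe_demonstration(self, action):
        """Demonstrations directly raise the weight of the demonstrated action."""
        self.weights[action] += 1.0

    def act(self):
        """Sample an action in proportion to current preference weights."""
        actions, weights = zip(*self.weights.items())
        return random.choices(actions, weights=weights, k=1)[0]

    def receive_feedback(self, action, reward):
        """Positive or negative real-time feedback adjusts that action's weight."""
        self.weights[action] = max(0.1, self.weights[action] + reward)

if __name__ == "__main__":
    agent = CoCreativeAgent(actions=["short_stroke", "long_stroke", "dot"])
    agent.observe_demonstration("long_stroke")       # the user draws a long stroke
    proposal = agent.act()                           # the agent responds with its own mark
    agent.receive_feedback(proposal, reward=+0.5)    # the user keeps (approves) the mark
    print(proposal, agent.weights)
```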
{"title":"An Enactive Approach to Facilitate Interactive Machine Learning for Co-Creative Agents","authors":"N. Davis","doi":"10.1145/2757226.2764773","DOIUrl":"https://doi.org/10.1145/2757226.2764773","url":null,"abstract":"This paper introduces a novel approach to developing co-creative agents that collaborate in real time creative contexts, such as art and pretend play. Our approach builds upon recent work in computational creativity called interactive machine learning (IML). In IML, agents learn through demonstration, interaction, and real time feedback from a human user (as opposed to offline training). To apply IML to open-ended creative collaboration, we developed an enactive model of creativity (EMC) based upon the cognitive science theories of enaction. This paper introduces our enactive approach to building co-creative agents within the broader field of interactive machine learning by describing the theory, design, and initial prototypes of two co-creative agents.","PeriodicalId":231794,"journal":{"name":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128088158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}