Lock n' LoL: Group-based Limiting Assistance App to Mitigate Smartphone Distractions in Group Activities
Minsam Ko, Seungwoo Choi, K. Yatani, Uichin Lee. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2858036.2858568

Prior studies have addressed many negative aspects of mobile distractions in group activities. In this paper, we present Lock n' LoL, an application designed to help users focus on their group activities by allowing group members to limit their smartphone use together. In particular, it provides synchronous social awareness of each other's limiting behavior. This synchronous social awareness can foster feelings of connectedness among group members and can mitigate the social vulnerability caused by smartphone distraction (e.g., social exclusion), which often results in poor social experiences. Following an iterative prototyping process, we conducted a large-scale user study (n = 976) via a real-world field deployment. The study results reveal how the participants used Lock n' LoL in diverse contexts and how it helped them mitigate smartphone distractions.
Inspect, Embody, Invent: A Design Framework for Music Learning and Beyond
Xiao Xiao, H. Ishii. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2858036.2858577

This paper introduces a new framework to guide the design of interactive music learning systems, focusing on the piano. Taking a reflective approach, we identify the implicit assumption behind most existing systems, namely that learning music means learning to play correctly according to the score, and offer an alternative. We argue that systems should help cultivate higher levels of musicianship beyond correctness alone, for students of all levels. Drawing on both the pedagogical literature and the personal experience of learning to play the piano, we identify three skills central to musicianship (listening, embodied understanding, and creative imagination), which we generalize into the Inspect, Embody, Invent framework. To demonstrate how this framework translates into design, we discuss two existing interfaces from our own research, MirrorFugue and Andante, both built on a digitally controlled player piano augmented with in-situ projection. Finally, we discuss the framework's relevance to broader themes of embodied interaction and learning beyond the domain of music.
Breaking the Sound Barrier: Designing for Patient Participation in Audiological Consultations
Y. Dahl, G. Hanssen. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2858036.2858126

This paper explores how interactive technology can help overcome barriers to active patient participation in audiological consultations involving hearing aid tuning. We describe the design and evaluation of a prototype sound simulator intended to trigger reflection in patients regarding their hearing experiences and to help guide the tuning process. The prototype was tested in twelve consultations. Our findings suggest that it helped facilitate patient participation during the tuning process by: (1) encouraging an iterative, patient-driven approach; (2) stimulating context-specific feedback and follow-up questions; (3) helping patients make sense of medical information and treatment actions; (4) giving patients control over the pace of the process and over which situations to optimize for; and (5) promoting reflection on daily hearing aid use. Post-consultation interviews revealed that the prototype was perceived as useful in several ways. Our results highlight the benefit of flexible designs that can be appropriated to fit the spontaneous needs of patients and audiologists.
CardBoardiZer: Creatively Customize, Articulate and Fold 3D Mesh Models
Yunbo Zhang, Wei Gao, Luis Paredes, K. Ramani. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2858036.2858362

Computer-aided design of flat patterns allows designers to prototype foldable 3D objects made of heterogeneous sheets of material. However, existing origami design tools are often characterized by pre-synthesized patterns and fully automated algorithms, and adding articulated features to a desired model requires time-consuming synthesis of interconnected joints. This paper presents CardBoardiZer, a rapid cardboard-based prototyping platform that allows everyday sculptural 3D models to be easily customized, articulated, and folded. Our building platform allows the designer to 1) import a desired 3D shape, 2) customize articulated partitions into planar or volumetric foldable patterns, and 3) define rotational movements between partitions. The system then unfolds the model into 2D crease-cut-slot patterns ready for die-cutting and folding. We developed interactive algorithms and validated the usability of CardBoardiZer on various 3D models. Comparisons with Autodesk® 123D Make demonstrated significantly shorter time-to-prototype and greater ease of fabrication.
Shared Language and the Design of Home Healthcare Technology
A. Burrows, R. Gooberman-Hill, D. Coyle. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2858036.2858496

Words and language are central to most human communication. This paper explores the importance of language for the participatory design of smart home technologies for healthcare. We argue that to effectively involve a broad range of users in the design of new technologies, it is important to actively develop a shared language that is accessible to and owned by all stakeholders, and that facilitates productive dialogues among them. Our discussion is grounded first in work with end users, in which problematic language emerged as a key barrier to participation and effective design. We identify three specific categories of language barriers: jargon, ambiguity, and emotive words. Building on this, we undertook a workshop and a focus group, involving researchers developing smart health technologies and end users respectively, in which the focus was on generating a shared language. We discuss this process, including examples of alternative terminology that emerged and specific strategies for creating a shared language.
The Social Media Ecology: User Perceptions, Strategies and Challenges
Xuan Zhao, Cliff Lampe, N. Ellison. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2858036.2858333

Many existing studies of social media focus on a single platform, but the reality of users' lived experience is that most incorporate multiple platforms into their communication practices in order to reach the people and networks they wish to influence. To better understand how people make sharing decisions across multiple sites, we asked our participants (N=29) to categorize all the modes of communication they used, with the goal of surfacing their mental models of managing sharing across platforms. Our interview data suggest that people simultaneously consider "audience" and "content" when sharing, and that these needs sometimes compete; that they have a strong desire both to maintain boundaries between platforms and to allow content and audiences to permeate those boundaries; and that they strive to stabilize their own communication ecosystem while needing to respond to changes brought by the emergence of new tools, practices, and contacts. We unpack the implications of these tensions and suggest future design possibilities.
Can Eye Help You?: Effects of Visualizing Eye Fixations on Remote Collaboration Scenarios for Physical Tasks
Keita Higuchi, Ryo Yonetani, Yoichi Sato. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2858036.2858438

In this work, we investigate how remote collaboration between a local worker and a remote collaborator changes when the collaborator's eye fixations are presented to the worker. We track the collaborator's points of gaze on a monitor displaying the physical workspace and visualize them in that space via a projector or an optical see-through head-mounted display. Through a series of user studies, we found the following: 1) eye fixations can serve as a fast and precise pointer to objects of the collaborator's interest; 2) eyes and other modalities, such as hand gestures and speech, are used differently for object identification and manipulation; 3) eyes are used for explicit instructions only when combined with speech; and 4) the worker can predict some of the collaborator's intentions, such as their current interest and next instruction.
EyeGrip: Detecting Targets in a Series of Uni-directional Moving Objects Using Optokinetic Nystagmus Eye Movements
Shahram Jalaliniya, D. Mardanbegi. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2858036.2858584

EyeGrip is a novel yet simple technique that analyses eye movements to automatically detect a user's objects of interest in a sequence of visual stimuli moving horizontally or vertically across the user's view. We assess the viability of this technique in a scenario where the user looks at a sequence of images moving horizontally on a display while an eye tracker records the user's eye movements. We conducted an experiment to measure the performance of the proposed approach, and also investigated how the scrolling speed and the maximum number of images visible on the screen influence EyeGrip's accuracy. Based on the results, we propose guidelines for designing EyeGrip-based interfaces. EyeGrip can be considered an implicit gaze interaction technique with potential use in a broad range of applications, such as large screens, mobile devices, and eyewear computers. In this paper, we demonstrate its capabilities with two example applications: 1) a mind-reading game and 2) a picture selection system. Our study shows that, with an appropriate speed and number of visible images, the method can support a fast scrolling task in which the system accurately (87%) detects the moving images that are visually appealing to the user, stops the scrolling, and brings the item(s) of interest back onto the screen.
Neurotics Can't Focus: An in situ Study of Online Multitasking in the Workplace
G. Mark, Shamsi T. Iqbal, M. Czerwinski, Paul Johns, A. Sano. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2858036.2858202

In HCI research, attention has focused on understanding external influences on workplace multitasking. We instead explore how multitasking is influenced by individual factors: personality, stress, and sleep. We tracked forty information workers' online activity over two work weeks. The median duration of online screen focus was 40 seconds. The personality trait of Neuroticism was associated with shorter online focus durations, while Impulsivity-Urgency was associated with longer ones. Stress and sleep duration showed trends toward an inverse association with online focus. Shorter focus durations were associated with lower self-assessed productivity at day's end. Factor analysis revealed a lack-of-control factor that significantly predicts multitasking. Our results suggest that there may be a distractibility trait that makes some individuals susceptible to online attention shifting in the workplace. These results have implications for information systems (e.g., educational systems, game design) where attentional focus is key.
The Social Side of Software Platform Ecosystems
C. D. Souza, Fernando Marques Figueira Filho, Müller Miranda, Renato Ferreira, Christoph Treude, Leif Singer. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2858036.2858431

Software ecosystems, as a paradigm for large-scale software development, encompass a complex mix of technical, business, and social aspects. While significant research has been conducted to understand the technical and business aspects, the social aspects of software ecosystems are less well understood. To close this gap, this paper presents the results of an empirical study of how social aspects influence developers' participation in software ecosystems. We conducted 25 interviews with mobile software developers and an online survey with 83 respondents from the mobile software development community. Our results point to a complex social system based on continued interaction and mutual support among different actors, including developers, friends, end users, developers from large companies, and online communities. These findings highlight the importance of social aspects for the sustainability of software ecosystems, both during the initial adoption phase and for the long-term retention of developers.