Optimizing temporal topic segmentation for intelligent text visualization
Shimei Pan, Michelle X. Zhou, Yangqiu Song, Weihong Qian, Fei Wang, Shixia Liu
IUI 2013, pp. 339-350. doi:10.1145/2449396.2449441

We are building a topic-based, interactive visual analytic tool that aids users in analyzing large collections of text. To help users quickly discover content evolution and significant content transitions within a topic over time, here we present a novel, constraint-based approach to temporal topic segmentation. Our solution splits a discovered topic into multiple linear, non-overlapping sub-topics along a timeline by satisfying a diverse set of semantic, temporal, and visualization constraints simultaneously. For each derived sub-topic, our solution also automatically selects a set of representative keywords to summarize the main content of the sub-topic. Our extensive evaluation, including a crowd-sourced user study, demonstrates the effectiveness of our method over an existing baseline.
Directing exploratory search: reinforcement learning from user interactions with keywords
D. Glowacka, Tuukka Ruotsalo, Ksenia Konyushkova, Kumaripaba Athukorala, Samuel Kaski, Giulio Jacucci
IUI 2013, pp. 117-128. doi:10.1145/2449396.2449413

Techniques for both exploratory and known-item search tend to direct users only to more specific subtopics or individual documents, rather than letting them direct the exploration of the information space. We present an interactive information retrieval system that combines reinforcement learning techniques with a novel user interface design to engage users actively in directing the search. Users can directly manipulate document features (keywords) to indicate their interests, and reinforcement learning is used to model the user, allowing the system to trade off exploration and exploitation. This gives users the opportunity to steer their search more effectively: closer to, further from, or along a chosen direction. A task-based user study conducted with 20 participants, comparing our system to a traditional query-based baseline, indicates that our system significantly improves the effectiveness of information retrieval by providing access to more relevant and novel information without requiring more time to acquire it.
SmartDCap: semi-automatic capture of higher quality document images from a smartphone
Francine Chen, S. Carter, Laurent Denoue, J. Kumar
IUI 2013, pp. 287-296. doi:10.1145/2449396.2449433

People frequently capture photos with their smartphones, and some are starting to capture images of documents. However, the quality of captured document images is often lower than expected, even when an application that performs post-processing to improve the image is used. To improve the quality of captured images before post-processing, we developed the Smart Document Capture (SmartDCap) application, which provides real-time feedback to users about the likely quality of a captured image. The quality measures capture the sharpness and framing of a page or regions on a page, such as a set of one or more columns, a part of a column, a figure, or a table. Using our approach, while users adjust the camera position, the application automatically determines when to take a picture of a document to produce a good quality result. We performed a subjective evaluation comparing SmartDCap and the Android Ice Cream Sandwich (ICS) camera application; we also used raters to evaluate the quality of the captured images. Our results indicate that users find SmartDCap to be as easy to use as the standard ICS camera application. Also, images captured using SmartDCap are sharper and better framed on average than images captured using the ICS camera application.
LinkedVis: exploring social and semantic career recommendations
Svetlin Bostandjiev, J. O'Donovan, Tobias Höllerer
IUI 2013, pp. 107-116. doi:10.1145/2449396.2449412

This paper presents LinkedVis, an interactive visual recommender system that combines social and semantic knowledge to produce career recommendations based on the LinkedIn API. A collaborative (social) approach is employed to identify professionals with similar career paths and produce personalized recommendations of both companies and roles. To unify semantically identical but lexically distinct entities and arrive at better user models, we employ lightweight natural language processing and entity resolution using semantic information from a variety of endpoints on the web. Elements from the underlying recommendation algorithm are exposed through an interactive interface that allows users to manipulate different aspects of the algorithm and the data it operates on, letting them explore a variety of "what-if" scenarios around their current profile. We evaluate LinkedVis through leave-one-out accuracy and diversity experiments on a data corpus collected from 47 users and their LinkedIn connections, as well as through a supervised study of 27 users exploring their own profiles and recommendations interactively. Results show that our approach outperforms a benchmark recommendation algorithm without semantic resolution in terms of accuracy and diversity, and that the ability to tweak recommendations interactively by adjusting profile item and social connection weights further improves predictive accuracy. Questionnaires on the user experience with the explanatory and interactive aspects of the application reveal very high user acceptance and satisfaction.
User-adaptive information visualization: using eye gaze data to infer visualization tasks and user cognitive abilities
B. Steichen, G. Carenini, C. Conati
IUI 2013, pp. 317-328. doi:10.1145/2449396.2449439

Information visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities, and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this paper presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using a user's eye gaze patterns while interacting with a given visualization to predict the user's visualization tasks, as well as user cognitive abilities including perceptual speed, visual working memory, and verbal working memory. We show that such predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are discussed in view of designing visualization systems that can adapt to each individual user in real time.
Automatic and continuous user task analysis via eye activity
Siyuan Chen, J. Epps, Fang Chen
IUI 2013, pp. 57-66. doi:10.1145/2449396.2449406

A day in the life of a user can be segmented into a series of tasks: a user begins a task, is loaded perceptually and cognitively to some extent by the objects and mental challenges that comprise that task, then at some point switches or is distracted to a new task, and so on. Understanding contextual task characteristics and user behavior during interaction can benefit the development of intelligent systems that aid user task management. Applications that aid the user in one way or another have proliferated as computing devices become more and more of a constant companion. However, direct and continuous observation of individual tasks in a naturalistic context and subsequent task analysis, for example via the diary method, have traditionally been manual processes. We propose an automatic task analysis system that monitors the user's current task and analyzes it in terms of task transitions and the perceptual and cognitive load the task imposes. An experiment was conducted in which participants were required to work continuously on groups of three sequential tasks of different types. Three classes of eye activity, namely pupillary response, blinks, and eye movements, were analyzed to detect task-transition and non-transition states, and to estimate three levels of perceptual load and three levels of cognitive load every second to infer task characteristics. This paper reports statistically significant classification accuracies in all cases and demonstrates the feasibility of this approach for task monitoring and analysis.
Helping users with information disclosure decisions: potential for adaptation
Bart P. Knijnenburg, A. Kobsa
IUI 2013, pp. 407-416. doi:10.1145/2449396.2449448

Personalization relies on personal data about each individual user. Users, however, are often reluctant to disclose information about themselves and to be "tracked" by a system. We investigated whether different types of rationales (justifications) for disclosure that have been suggested in the privacy literature would increase users' willingness to divulge demographic and contextual information about themselves, and would raise their satisfaction with the system. We also looked at the effect of the order of requests, owing to findings from the literature. Our experiment with a mockup of a mobile app recommender shows that no single strategy is optimal for everyone. However, heuristics can be defined that select, for each user, the most effective justification for raising disclosure or satisfaction, taking the user's gender, disclosure tendency, and the type of solicited personal information into account. We discuss the implications of these findings for research aimed at personalizing privacy strategies to each individual user.
Team reactions to voiced agent instructions in a pervasive game
Stuart Moran, Nadia Pantidi, K. Bachour, J. Fischer, Martin Flintham, T. Rodden, Simon Evans, Simon Johnson
IUI 2013, pp. 371-382. doi:10.1145/2449396.2449445

The assumed role of humans as controllers and instructors of machines is changing. As systems become more complex and incomprehensible to humans, it will be increasingly necessary for us to place confidence in intelligent interfaces and follow their instructions and recommendations. This type of relationship becomes particularly intricate when we consider significant numbers of humans and agents working together in collectives. While instruction-based interfaces and agents already exist, our understanding of them within the field of Human-Computer Interaction is still limited.

As such, we developed a large-scale pervasive game called 'Cargo', in which a semi-autonomous rule-based agent delivers text-to-speech instructions to multiple teams of players via their mobile phones as an interface. We describe how people received, negotiated, and acted upon the instructions in the game, both individually and as a team, and how players' initial plans and expectations shaped their understanding of the instructions.
Mind the gap: collecting commonsense data about simple experiences
J. Weltman, S. S. Iyengar, Michael Hegarty
IUI 2013, pp. 179-190. doi:10.1145/2449396.2449421

In natural language, there are many gaps between what is stated and what is understood. Speakers and listeners fill in these gaps, presumably from some life experience, but no one knows how to get this experiential data into a computer. As a first step, we have created a methodology and software interface for collecting commonsense data about simple experiences. This work is intended to form the basis of a new resource for natural language processing.

We model experience as a sequence of comic frames, annotated with the changing intentional and physical states of the characters and objects. To create an annotated experience, our software interface guides non-experts in identifying facts about experiences that humans normally take for granted. As part of this process, the system asks questions using the Socratic method to help users notice difficult-to-articulate commonsense data. A test on ten subjects indicates that non-experts are able to produce high-quality experiential data.
Recommendation system for automatic design of magazine covers
Ali Jahanian, Jerry Liu, Qian Lin, D. Tretter, Eamonn O'Brien-Strain, S. Lee, Nic Lyons, J. Allebach
IUI 2013, pp. 95-106. doi:10.1145/2449396.2449411

In this paper, we present a recommendation system for the automatic design of magazine covers. Our users are non-designer designers: individuals or small and medium businesses who want to design without hiring a professional designer while still wanting to create aesthetically compelling designs. Because a design should have a purpose, we suggest a number of semantic features to the user, e.g., "clean and clear," "dynamic and active," or "formal," to describe the color mood for the purpose of his/her design. Based on these high-level features and a number of low-level features, such as the complexity of the visual balance in a photo, our system selects the best photos from the user's album for his/her design. Our system then generates several alternative designs that can be rated by the user. Consequently, our system generates future designs based on the user's style. In this fashion, our system personalizes the designs of a user based on his/her preferences.