M. Landoni, M. S. Pera, Emiliana Murgia, T. Huibers
In the classroom, children mainly use general-purpose search systems such as Google, Baidu or Bing. For many years and from different perspectives, researchers have called for child-friendly search systems for children in educational contexts. Research responding to this call often focuses on the relevance, readability and reliability of the retrieved documents. Instead, inspired by a recent study of adult users on the role emotions play in web search, we explore whether and how children searching in a school context react to the emotional content that is often part of Search Engine Result Pages. We do so by examining emotions inferred from queries and the corresponding retrieved resources in query logs produced by children aged 9 to 11 in classroom settings in three different countries. We also consider teachers' observations that contextualize this analysis.
{"title":"Inside Out: Exploring the Emotional Side of Search Engines in the Classroom","authors":"M. Landoni, M. S. Pera, Emiliana Murgia, T. Huibers","doi":"10.1145/3340631.3394847","DOIUrl":"https://doi.org/10.1145/3340631.3394847","url":null,"abstract":"In the classroom, children mainly use general search systems such as Google, Baidu or Bing. For many years and from different perspectives, a call has been made that it is necessary to provide children in an educational context with child-friendly search systems. Research responding to this call often focuses on the relevance, readability and reliability of the retrieved documents. Instead, inspired by a recent study based on adult users on the role emotions play in web search, we explore whether and how children searching in a school context react to the emotional content often part of Search Engine Result Pages. We do so by examining emotions inferred from queries and corresponding retrieved resources in query logs produced by children ages 9 to 11 in a classroom setting in 3 different countries. We also consider teachers' observations that contextualize this analysis.","PeriodicalId":417607,"journal":{"name":"Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization","volume":"32 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116343599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While online content is personalized to an increasing degree, e.g., using recommender systems (RS), the rationale behind personalization and how users can adjust it typically remain opaque. This opacity has often been observed to have negative effects on the user experience and the perceived quality of RS. As a result, research has increasingly taken user-centric aspects such as transparency and control into account when assessing the quality of an RS. However, we argue that too little of this research has investigated users' perception and understanding of RS in their entirety. In this paper, we explore users' mental models of RS. More specifically, we followed the qualitative grounded theory methodology and conducted 10 semi-structured face-to-face interviews with typical and regular Netflix users. During the interviews, participants expressed high levels of uncertainty and confusion about the RS in Netflix. Consequently, we found a broad range of different mental models. Nevertheless, we also identified a general structure underlying all of these models, consisting of four steps: data acquisition, inference of a user profile, comparison of user profiles or items, and generation of recommendations. Based on our findings, we discuss implications for designing more transparent, controllable, and user-friendly RS in the future.
{"title":"Exploring Mental Models for Transparent and Controllable Recommender Systems: A Qualitative Study","authors":"Thao Ngo, Johannes Kunkel, J. Ziegler","doi":"10.1145/3340631.3394841","DOIUrl":"https://doi.org/10.1145/3340631.3394841","url":null,"abstract":"While online content is personalized to an increasing degree, eg. using recommender systems (RS), the rationale behind personalization and how users can adjust it typically remains opaque. This was often observed to have negative effects on the user experience and perceived quality of RS. As a result, research increasingly has taken user-centric aspects such as transparency and control of a RS into account, when assessing its quality. However, we argue that too little of this research has investigated the users' perception and understanding of RS in their entirety. In this paper, we explore the users' mental models of RS. More specifically, we followed the qualitative grounded theory methodology and conducted 10 semi-structured face-to-face interviews with typical and regular Netflix users. During interviews participants expressed high levels of uncertainty and confusion about the RS in Netflix. Consequently, we found a broad range of different mental models. Nevertheless, we also identified a general structure underlying all of these models, consisting of four steps: data acquisition, inference of user profile, comparison of user profiles or items, and generation of recommendations. Based on our findings, we discuss implications to design more transparent, controllable, and user friendly RS in the future.","PeriodicalId":417607,"journal":{"name":"Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128749304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
B. Mobasher, S. Kleanthous, Michael D. Ekstrand, Bettina Berendt, Jahna Otterbacher, Avital Shulner Tal
The 3rd FairUMAP workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on the one hand, and bias, fairness, and transparency in algorithmic systems on the other.
{"title":"FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization","authors":"B. Mobasher, S. Kleanthous, Michael D. Ekstrand, Bettina Berendt, Jahna Otterbacher, Avital Shulner Tal","doi":"10.1145/3340631.3398671","DOIUrl":"https://doi.org/10.1145/3340631.3398671","url":null,"abstract":"The 3rd FairUMAP workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on the one hand, and bias, fairness and transparency in algorithmic systems on the other hand.","PeriodicalId":417607,"journal":{"name":"Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125845855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eliciting the preferences and needs of tourists is challenging, since people often have difficulty expressing them explicitly, especially in the initial phase of travel planning. Recommender systems employed at this early stage of planning can therefore contribute substantially to a user's overall satisfaction. Previous studies have explored pictures as a tool of communication and as a way to implicitly deduce a traveller's preferences and needs. In this paper, we conduct a user study to verify previous claims and conceptual work on the feasibility of modelling travel interests from a selection of a user's pictures. We utilize fine-tuned convolutional neural networks to compute a vector representation of each picture, where each dimension corresponds to a travel behavioural pattern from the traditional Seven-Factor model. In our study, we followed strict privacy principles and did not save uploaded pictures after computing their vector representations. We aggregate the representations of a user's pictures into a single user representation, i.e., a touristic profile, using different strategies. In our user study with 81 participants, we let users adjust the predicted touristic profile and confirm the usefulness of our approach. Our results show that, given a collection of pictures, the touristic profile of a user can be determined.
{"title":"Eliciting Touristic Profiles: A User Study on Picture Collections","authors":"Mete Sertkan, J. Neidhardt, H. Werthner","doi":"10.1145/3340631.3394868","DOIUrl":"https://doi.org/10.1145/3340631.3394868","url":null,"abstract":"Eliciting the preferences and needs of tourists is challenging, since people often have difficulties to explicitly express them -- especially in the initial phase of travel planning. Recommender systems employed at the early stage of planning can therefore be very beneficial to the general satisfaction of a user. Previous studies have explored pictures as a tool of communication and as a way to implicitly deduce a traveller's preferences and needs. In this paper, we conduct a user study to verify previous claims and conceptual work on the feasibility of modelling travel interests from a selection of a user's pictures. We utilize fine-tuned convolutional neural networks to compute a vector representation of a picture, where each dimension corresponds to a travel behavioural pattern from the traditional Seven-Factor model. In our study, we followed strict privacy principles and did not save uploaded pictures after computing their vector representation. We aggregate the representations of the pictures of a user into a single user representation, i.e., touristic profile, using different strategies. In our user study with 81 participants, we let users adjust the predicted touristic profile and confirm the usefulness of our approach. Our results show that given a collection of pictures the touristic profile of a user can be determined.","PeriodicalId":417607,"journal":{"name":"Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130168220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Increasing aggregate diversity (or catalog coverage) is an important system-level objective in many recommendation domains where it may be desirable to mitigate the popularity bias and to improve the coverage of long-tail items in recommendations given to users. This is especially important in multistakeholder recommendation scenarios where it may be important to optimize utilities not just for the end user, but also for other stakeholders such as item sellers or producers who desire a fair representation of their items across recommendation lists produced by the system. Unfortunately, attempts to increase aggregate diversity often result in lower recommendation accuracy for end users. Thus, addressing this problem requires an approach that can effectively manage the trade-offs between accuracy and aggregate diversity. In this work, we propose a two-sided post-processing approach in which both user and item utilities are considered. Our goal is to maximize aggregate diversity while minimizing loss in recommendation accuracy. Our solution is a generalization of the Deferred Acceptance algorithm which was proposed as an efficient algorithm to solve the well-known stable matching problem. We prove that our algorithm results in a unique user-optimal stable match between items and users. Using three recommendation datasets, we empirically demonstrate the effectiveness of our approach in comparison to several baselines. In particular, our results show that the proposed solution is quite effective in increasing aggregate diversity and item-side utility while optimizing recommendation accuracy for end users.
{"title":"Using Stable Matching to Optimize the Balance between Accuracy and Diversity in Recommendation","authors":"Farzad Eskandanian, B. Mobasher","doi":"10.1145/3340631.3394858","DOIUrl":"https://doi.org/10.1145/3340631.3394858","url":null,"abstract":"Increasing aggregate diversity (or catalog coverage) is an important system-level objective in many recommendation domains where it may be desirable to mitigate the popularity bias and to improve the coverage of long-tail items in recommendations given to users. This is especially important in multistakeholder recommendation scenarios where it may be important to optimize utilities not just for the end user, but also for other stakeholders such as item sellers or producers who desire a fair representation of their items across recommendation lists produced by the system. Unfortunately, attempts to increase aggregate diversity often result in lower recommendation accuracy for end users. Thus, addressing this problem requires an approach that can effectively manage the trade-offs between accuracy and aggregate diversity. In this work, we propose a two-sided post-processing approach in which both user and item utilities are considered. Our goal is to maximize aggregate diversity while minimizing loss in recommendation accuracy. Our solution is a generalization of the Deferred Acceptance algorithm which was proposed as an efficient algorithm to solve the well-known stable matching problem. We prove that our algorithm results in a unique user-optimal stable match between items and users. Using three recommendation datasets, we empirically demonstrate the effectiveness of our approach in comparison to several baselines. In particular, our results show that the proposed solution is quite effective in increasing aggregate diversity and item-side utility while optimizing recommendation accuracy for end users.","PeriodicalId":417607,"journal":{"name":"Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133665184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nasim Sonboli, Farzad Eskandanian, R. Burke, Weiwen Liu, B. Mobasher
As recommender systems have become more widespread and moved into areas with greater social impact, such as employment and housing, researchers have begun to seek ways to ensure fairness in the results that such systems produce. This work has primarily focused on developing recommendation approaches in which fairness metrics are jointly optimized along with recommendation accuracy. However, previous work has largely ignored how individual preferences may limit the ability of an algorithm to produce fair recommendations. Furthermore, with few exceptions, researchers have only considered scenarios in which fairness is measured relative to a single sensitive feature or attribute (such as race or gender). In this paper, we present a re-ranking approach to fairness-aware recommendation that learns individual preferences across multiple fairness dimensions and uses them to enhance provider fairness in recommendation results. Specifically, we show that our opportunistic and metric-agnostic approach achieves a better trade-off between accuracy and fairness than prior re-ranking approaches and does so across multiple fairness dimensions.
{"title":"Opportunistic Multi-aspect Fairness through Personalized Re-ranking","authors":"Nasim Sonboli, Farzad Eskandanian, R. Burke, Weiwen Liu, B. Mobasher","doi":"10.1145/3340631.3394846","DOIUrl":"https://doi.org/10.1145/3340631.3394846","url":null,"abstract":"As recommender systems have become more widespread and moved into areas with greater social impact, such as employment and housing, researchers have begun to seek ways to ensure fairness in the results that such systems produce. This work has primarily focused on developing recommendation approaches in which fairness metrics are jointly optimized along with recommendation accuracy. However, the previous work had largely ignored how individual preferences may limit the ability of an algorithm to produce fair recommendations. Furthermore, with few exceptions, researchers have only considered scenarios in which fairness is measured relative to a single sensitive feature or attribute (such as race or gender). In this paper, we present a re-ranking approach to fairness-aware recommendation that learns individual preferences across multiple fairness dimensions and uses them to enhance provider fairness in recommendation results. Specifically, we show that our opportunistic and metric-agnostic approach achieves a better trade-off between accuracy and fairness than prior re-ranking approaches and does so across multiple fairness dimensions.","PeriodicalId":417607,"journal":{"name":"Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130354676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social collaborative platforms such as GitHub and Stack Overflow have been increasingly used to improve work productivity via collaborative efforts. To improve user experiences in these platforms, it is desirable to have a recommender system that can suggest not only items (e.g., a GitHub repository) to a user, but also activities to be performed on the suggested items (e.g., forking a repository). To this end, we propose a new approach dubbed Keen2Act, which decomposes the recommendation problem into two stages: the Keen and Act steps. The Keen step identifies, for a given user, a (sub)set of items in which he/she is likely to be interested. The Act step then recommends to the user which activities to perform on the identified set of items. This decomposition provides a practical approach to tackling complex activity recommendation tasks while producing higher recommendation quality. We evaluate our proposed approach using two real-world datasets and obtain promising results whereby Keen2Act outperforms several baseline models.
{"title":"Keen2Act: Activity Recommendation in Online Social Collaborative Platforms","authors":"R. Lee, Thong Hoang, R. J. Oentaryo, David Lo","doi":"10.1145/3340631.3394884","DOIUrl":"https://doi.org/10.1145/3340631.3394884","url":null,"abstract":"Social collaborative platforms such as GitHub and Stack Overflow have been increasingly used to improve work productivity via collaborative efforts. To improve user experiences in these platforms, it is desirable to have a recommender system that can suggest not only items (e.g., a GitHub repository) to a user, but also activities to be performed on the suggested items (e.g., forking a repository). To this end, we propose a new approach dubbed Keen2Act, which decomposes the recommendation problem into two stages: the Keen and Act steps. The Keen step identifies, for a given user, a (sub)set of items in which he/she is likely to be interested. The Act step then recommends to the user which activities to perform on the identified set of items. This decomposition provides a practical approach to tackling complex activity recommendation tasks while producing higher recommendation quality. We evaluate our proposed approach using two real-world datasets and obtain promising results whereby Keen2Act outperforms several baseline models.","PeriodicalId":417607,"journal":{"name":"Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115832046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fabio Colella, Pedram Daee, Jussi P. P. Jokinen, Antti Oulasvirta, Samuel Kaski
A central concern in an interactive intelligent system is the optimization of its actions so that they are maximally helpful to its human user. In recommender systems, for instance, the action is to choose what to recommend, and the optimization task is to recommend items the user prefers. The optimization is based on the user's earlier feedback (e.g. "likes" and "dislikes"), and the algorithms assume the feedback to be faithful: when the user clicks "like," they actually prefer the item. We argue that this fundamental assumption can be extensively violated by human users, who are not passive feedback sources. Instead, they are in control, actively steering the system towards their goal. To verify the hypothesis that humans steer, and that they are able to improve performance by steering, we designed a function optimization task in which a human and an optimization algorithm collaborate to find the maximum of a 1-dimensional function. At each iteration, the optimization algorithm queries the user for the value of a hidden function f at a point x, and the user, who sees the hidden function, provides an answer about f(x). Our study with 21 participants shows that users who understand how the optimization works strategically provide biased answers (answers not equal to f(x)), which results in the algorithm finding the optimum significantly faster. Our work highlights that next-generation intelligent systems will need user models capable of helping users who steer systems to pursue their goals.
{"title":"Human Strategic Steering Improves Performance of Interactive Optimization","authors":"Fabio Colella, Pedram Daee, Jussi P. P. Jokinen, Antti Oulasvirta, Samuel Kaski","doi":"10.1145/3340631.3394883","DOIUrl":"https://doi.org/10.1145/3340631.3394883","url":null,"abstract":"A central concern in an interactive intelligent system is optimization of its actions, to be maximally helpful to its human user. In recommender systems for instance, the action is to choose what to recommend, and the optimization task is to recommend items the user prefers. The optimization is done based on earlier user's feedback (e.g. \"likes\" and \"dislikes\"), and the algorithms assume the feedback to be faithful. That is, when the user clicks \"like,\" they actually prefer the item. We argue that this fundamental assumption can be extensively violated by human users, who are not passive feedback sources. Instead, they are in control, actively steering the system towards their goal. To verify this hypothesis, that humans steer and are able to improve performance by steering, we designed a function optimization task where a human and an optimization algorithm collaborate to find the maximum of a 1-dimensional function. At each iteration, the optimization algorithm queries the user for the value of a hidden function f at a point x, and the user, who sees the hidden function, provides an answer about f(x). Our study on 21 participants shows that users who understand how the optimization works, strategically provide biased answers (answers not equal to f(x)), which results in the algorithm finding the optimum significantly faster. Our work highlights that next-generation intelligent systems will need user models capable of helping users who steer systems to pursue their goals.","PeriodicalId":417607,"journal":{"name":"Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128851570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, B. Mobasher, R. Burke
Recommender systems are often biased toward popular items. In other words, a few items are recommended frequently while the majority of items do not get proportionate attention. This leads to low coverage of items in recommendation lists across users (i.e. low aggregate diversity) and an unfair distribution of recommended items. In this paper, we introduce FairMatch, a general graph-based algorithm that works as a post-processing approach after recommendation generation to improve aggregate diversity. The algorithm iteratively finds items that are rarely recommended yet high-quality and adds them to the users' final recommendation lists. This is done by solving the maximum flow problem on the recommendation bipartite graph. While we focus on aggregate diversity and a fair distribution of recommended items, the algorithm can be adapted to other recommendation scenarios using different underlying definitions of fairness. A comprehensive set of experiments on two datasets and comparisons with state-of-the-art baselines show that FairMatch, while significantly improving aggregate diversity, provides comparable recommendation accuracy.
{"title":"FairMatch: A Graph-based Approach for Improving Aggregate Diversity in Recommender Systems","authors":"M. Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, B. Mobasher, R. Burke","doi":"10.1145/3340631.3394860","DOIUrl":"https://doi.org/10.1145/3340631.3394860","url":null,"abstract":"Recommender systems are often biased toward popular items. In other words, few items are frequently recommended while the majority of items do not get proportionate attention. That leads to low coverage of items in recommendation lists across users (i.e. low aggregate diversity) and unfair distribution of recommended items. In this paper, we introduce FairMatch, a general graph-based algorithm that works as a post-processing approach after recommendation generation for improving aggregate diversity. The algorithm iteratively finds items that are rarely recommended yet are high-quality and add them to the users' final recommendation lists. This is done by solving the maximum flow problem on the recommendation bipartite graph. While we focus on aggregate diversity and fair distribution of recommended items, the algorithm can be adapted to other recommendation scenarios using different underlying definitions of fairness. A comprehensive set of experiments on two datasets and comparison with state-of-the-art baselines show that FairMatch, while significantly improving aggregate diversity, provides comparable recommendation accuracy.","PeriodicalId":417607,"journal":{"name":"Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132047105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video summaries or highlights are a compelling alternative for exploring and contextualizing unprecedented amounts of video material. However, the summarization process is commonly automatic, non-transparent and potentially biased towards particular aspects depicted in the original video. Our aim is therefore to help users such as archivists or collection managers quickly understand which summaries are most representative of an original video. In this paper, we present empirical results on the utility of different types of visual explanations for achieving transparency for end users about how representative video summaries are with respect to the original video. We consider four types of video summary explanations, which use, in different ways, the concepts extracted from the original video's subtitles and video stream, and their prominence. The explanations are generated to meet target user preferences and express different dimensions of transparency: concept prominence, semantic coverage, distance and quantity of coverage. In two user studies, we evaluate the utility of the visual explanations for achieving transparency for end users. Our results show that explanations representing all of the dimensions have the highest utility for transparency and, consequently, for understanding the representativeness of video summaries.
{"title":"Eliciting User Preferences for Personalized Explanations for Video Summaries","authors":"O. Inel, N. Tintarev, Lora Aroyo","doi":"10.1145/3340631.3394862","DOIUrl":"https://doi.org/10.1145/3340631.3394862","url":null,"abstract":"Video summaries or highlights are a compelling alternative for exploring and contextualizing unprecedented amounts of video material. However, the summarization process is commonly automatic, non-transparent and potentially biased towards particular aspects depicted in the original video. Therefore, our aim is to help users like archivists or collection managers to quickly understand which summaries are the most representative for an original video. In this paper, we present empirical results on the utility of different types of visual explanations to achieve transparency for end users on how representative video summaries are, with respect to the original video. We consider four types of video summary explanations, which use in different ways the concepts extracted from the original video subtitles and the video stream, and their prominence. The explanations are generated to meet target user preferences and express different dimensions of transparency: concept prominence, semantic coverage, distance and quantity of coverage. In two user studies we evaluate the utility of the visual explanations for achieving transparency for end users. Our results show that explanations representing all of the dimensions have the highest utility for transparency, and consequently, for understanding the representativeness of video summaries.","PeriodicalId":417607,"journal":{"name":"Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127816060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}