User Preferential Tour Recommendation Based on POI-Embedding Methods
N. Ho, Kwan Hui Lim. 26th International Conference on Intelligent User Interfaces - Companion, 2021-03-03. DOI: 10.1145/3397482.3450717.
Tour itinerary planning and recommendation are challenging tasks for tourists in unfamiliar countries. Many tour recommenders consider only broad POI categories and do not align well with users' preferences and other locational constraints. We propose an algorithm that recommends personalized tours using POI-embedding methods, which provide a finer-grained representation of POI types. Our recommendation algorithm generates a sequence of POIs that optimizes for time and locational constraints, as well as users' preferences based on the past trajectories of similar tourists. We model tour recommendation after word-embedding models in natural language processing, coupled with an iterative algorithm that generates itineraries satisfying time constraints. Using a Flickr dataset of four cities, preliminary experimental results show that our algorithm recommends relevant and accurate itineraries, as measured by recall, precision, and F1-score.
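The abstract describes scoring POIs by embedding similarity to a user's preferences and assembling an itinerary under a time budget. A minimal sketch of that idea, with entirely toy embeddings and visit times (the paper learns embeddings from real tourist trajectories; the greedy selection here is only an illustrative stand-in for its iterative algorithm):

```python
import math

# Hypothetical POI embeddings; in the paper these would be learned by a
# word2vec-style model over tourists' visit sequences. Values are toy.
POI_EMBEDDINGS = {
    "museum":  [0.9, 0.1, 0.0],
    "gallery": [0.8, 0.2, 0.1],
    "beach":   [0.0, 0.9, 0.3],
    "park":    [0.1, 0.8, 0.4],
}
VISIT_TIME = {"museum": 2.0, "gallery": 1.5, "beach": 3.0, "park": 1.0}  # hours

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def recommend_itinerary(user_profile, time_budget):
    """Greedily pick the POIs most similar to the user's preference
    vector until the time budget is exhausted."""
    itinerary, remaining = [], time_budget
    candidates = sorted(POI_EMBEDDINGS,
                        key=lambda p: cosine(POI_EMBEDDINGS[p], user_profile),
                        reverse=True)
    for poi in candidates:
        if VISIT_TIME[poi] <= remaining:
            itinerary.append(poi)
            remaining -= VISIT_TIME[poi]
    return itinerary

# A culture-leaning tourist with roughly four hours to spend.
print(recommend_itinerary([1.0, 0.0, 0.0], 4.0))  # → ['museum', 'gallery']
```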
TweetCOVID: A System for Analyzing Public Sentiments and Discussions about COVID-19 via Twitter Activities
Jolin Shaynn-Ly Kwan, Kwan Hui Lim. 26th International Conference on Intelligent User Interfaces - Companion, 2021-03-02. DOI: 10.1145/3397482.3450733.
The COVID-19 pandemic has created widespread health and economic impacts, affecting millions around the world. To better understand these impacts, we present the TweetCOVID system, which uses public tweets to analyze public reactions to the COVID-19 pandemic in terms of sentiments, emotions, topics of interest, and controversial discussions, over a range of time periods and locations. We also present three example use cases that illustrate the usefulness of the proposed TweetCOVID system.
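The core pipeline implied above is: filter tweets by location and time window, score each for sentiment, then aggregate. A toy sketch with a tiny hand-made lexicon (the actual system would use trained sentiment and emotion classifiers; the field names and scores here are assumptions for illustration):

```python
from datetime import date

# Toy sentiment lexicon; a stand-in for trained sentiment/emotion models.
LEXICON = {"hope": 1, "recovery": 1, "grateful": 1,
           "fear": -1, "lockdown": -1, "worried": -1}

def sentiment(text):
    """Sum of lexicon scores for the words in a tweet."""
    return sum(LEXICON.get(w, 0) for w in text.lower().split())

def aggregate(tweets, location, start, end):
    """Mean sentiment of tweets matching a location and time window."""
    scores = [sentiment(t["text"]) for t in tweets
              if t["location"] == location and start <= t["date"] <= end]
    return sum(scores) / len(scores) if scores else None

tweets = [
    {"text": "grateful for the recovery", "location": "SG",
     "date": date(2020, 6, 1)},
    {"text": "worried about another lockdown", "location": "SG",
     "date": date(2020, 6, 2)},
    {"text": "hope things improve", "location": "US",
     "date": date(2020, 6, 1)},
]
print(aggregate(tweets, "SG", date(2020, 6, 1), date(2020, 6, 30)))  # → 0.0
```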
The Personalization Paradox: the Conflict between Accurate User Models and Personalized Adaptive Systems
Santiago Ontañón, Jichen Zhu. 26th International Conference on Intelligent User Interfaces - Companion, 2021-03-02. DOI: 10.1145/3397482.3450734.
Personalized adaptation technology has been adopted in a wide range of digital applications such as health, training and education, e-commerce, and entertainment. Personalization systems typically build a user model that characterizes the user at hand and then use this model to personalize the interaction. Personalization and user modeling, however, are often intrinsically at odds with each other (a fact sometimes referred to as the personalization paradox). In this paper, we take a closer look at this paradox and identify two ways in which it can manifest: feedback loops and moving targets. To illustrate these issues, we report results in the domain of personalized exergames (video games for physical exercise) and describe our early steps toward addressing some of the issues raised by the personalization paradox.
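The feedback-loop manifestation can be made concrete with a toy simulation (my own illustration, not the authors' exergame study): a system that estimates a user's preference from clicks, but under personalization mostly shows the item type its current estimate favors, ends up learning from data its own recommendations have biased.

```python
import random

TRUE_PREF = 0.3  # user's true probability of clicking an item of type A

def simulate(personalized, rounds=5000, seed=42):
    """Estimate the user's preference for type A from observed clicks.
    With `personalized=True`, exposure follows the current estimate,
    so the estimate feeds back into the data used to compute it."""
    rng = random.Random(seed)
    clicks_a = clicks_total = 0
    estimate = 0.5
    for _ in range(rounds):
        p_show_a = (0.9 if estimate > 0.5 else 0.1) if personalized else 0.5
        show_a = rng.random() < p_show_a
        click_prob = TRUE_PREF if show_a else 1 - TRUE_PREF
        if rng.random() < click_prob:
            clicks_total += 1
            clicks_a += show_a
            estimate = clicks_a / clicks_total
    return estimate

# Uniform exposure recovers the true preference (~0.3); personalized
# exposure locks the estimate far below it.
print("uniform:", round(simulate(False), 2))
print("personalized:", round(simulate(True), 2))
```

The non-personalized run converges near the true value because exposure is independent of the estimate; the personalized run is self-fulfilling, which is exactly the conflict between accurate user models and personalized adaptation that the paper names.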
TableLab: An Interactive Table Extraction System with Adaptive Deep Learning
N. Wang, D. Burdick, Yunyao Li. 26th International Conference on Intelligent User Interfaces - Companion, 2021-02-16. DOI: 10.1145/3397482.3450718.
Table extraction from PDF and image documents is a ubiquitous real-world task. Perfect extraction quality is difficult to achieve with a single out-of-the-box model due to (1) the wide variety of table styles, (2) the lack of training data representing this variety, and (3) the inherent ambiguity and subjectivity of table definitions across end-users. Meanwhile, building customized models from scratch can be difficult because annotating table data is expensive. TableLab addresses these challenges with a system in which users and models seamlessly work together to quickly customize high-quality extraction models from a few labelled examples in the user's document collection. Given an input document collection, TableLab first detects tables with similar structures (templates) by clustering embeddings from the extraction model, exploiting the fact that document collections often contain tables created from a limited set of templates. It then selects a few representative table examples already extracted with a pre-trained base deep learning model. Via an easy-to-use interface, users give feedback on these selections without having to identify every single error. TableLab applies this feedback to fine-tune the pre-trained model and returns the fine-tuned model's results to the user, who can repeat the process iteratively until the customized model reaches satisfactory performance.
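The template-detection step — cluster table embeddings, then surface one representative per cluster for user review — can be sketched as follows. The greedy threshold clustering here is a simple stand-in; the abstract does not say which clustering method TableLab actually uses, and the 2-D embeddings are toy values:

```python
import math

def euclid(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cluster_templates(embeddings, threshold=0.5):
    """Greedy single-pass clustering: assign each table embedding to the
    first cluster whose centroid lies within `threshold`, else start a
    new cluster (a stand-in for whatever clustering TableLab uses)."""
    clusters = []
    for i, emb in enumerate(embeddings):
        for c in clusters:
            if euclid(emb, c["centroid"]) < threshold:
                c["members"].append(i)
                n = len(c["members"])
                # Incrementally update the running centroid.
                c["centroid"] = [(cc * (n - 1) + e) / n
                                 for cc, e in zip(c["centroid"], emb)]
                break
        else:
            clusters.append({"centroid": list(emb), "members": [i]})
    return clusters

def pick_representatives(clusters):
    """One example table per template cluster for the user to review."""
    return [c["members"][0] for c in clusters]

# Four tables: two pairs with near-identical structure (two templates).
embeddings = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9)]
clusters = cluster_templates(embeddings)
print(pick_representatives(clusters))  # → [0, 2]
```

Labeling one representative per cluster is what lets a handful of user corrections cover every table sharing that template.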
FakeBuster: A DeepFakes Detection Tool for Video Conferencing Scenarios
V. Mehta, Parul Gupta, Ramanathan Subramanian, Abhinav Dhall. 26th International Conference on Intelligent User Interfaces - Companion, 2021-01-09. DOI: 10.1145/3397482.3450726.
This paper proposes FakeBuster, a novel DeepFake detector for (a) detecting impostors during video conferencing and (b) detecting manipulated faces on social media. FakeBuster is a standalone deep learning-based solution that enables a user to detect whether another person's video is manipulated or spoofed during a video conference. The tool is independent of the video conferencing solution and has been tested with the Zoom and Skype applications. It employs a 3D convolutional neural network to predict video fakeness. The network is trained on a combination of datasets such as Deeperforensics, DFDC, and VoxCeleb, plus deepfake videos created from locally captured images (specific to video conferencing scenarios). This diversity in the training data makes FakeBuster robust to multiple environments and facial manipulations, making it generalizable and ecologically valid.
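A 3D CNN scores fixed-length chunks of frames rather than single images, so a live detector needs to window the incoming stream and aggregate per-chunk scores into a verdict. A sketch of that aggregation, with `score_clip` as a placeholder for the trained network (the clip length, stride, and threshold are assumptions, not values from the paper):

```python
def chunk_frames(frames, clip_len=16, stride=8):
    """Yield overlapping clips of `clip_len` frames from a stream."""
    for start in range(0, max(len(frames) - clip_len, 0) + 1, stride):
        yield frames[start:start + clip_len]

def score_clip(clip):
    # Placeholder for the 3D-CNN forward pass. In this toy frame
    # representation, a frame is 1 if manipulated and 0 if genuine,
    # so the "score" is just the manipulated fraction.
    return sum(clip) / len(clip)

def video_fakeness(frames, threshold=0.5):
    """Average the per-clip fakeness scores into a video-level verdict."""
    scores = [score_clip(c) for c in chunk_frames(frames)]
    mean = sum(scores) / len(scores)
    return mean, mean > threshold

# Toy stream: 8 genuine frames followed by 24 manipulated ones.
frames = [0] * 8 + [1] * 24
mean, is_fake = video_fakeness(frames)
print(round(mean, 2), is_fake)  # → 0.83 True
```

Averaging over overlapping windows smooths out per-clip noise, which matters in a live meeting where a single noisy clip should not flip the verdict.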