Discriminative Robust Head-Pose and Gaze Estimation Using Kernel-DMCCA Features Fusion
Pub Date: 2020-03-01. DOI: 10.1142/s1793351x20500014
Salah Rabba, M. Kyan, Lei Gao, A. Quddus, A. S. Zandi, L. Guan
There remain outstanding challenges in improving the accuracy of multi-feature information for head-pose and gaze estimation. The proposed framework employs discriminative analysis for head-pose and gaze estimation using kernel discriminative multiple canonical correlation analysis (K-DMCCA). The feature extraction component of the framework includes spatial indexing, statistical and geometrical elements. Head-pose and gaze estimation is performed by aggregating these features and transforming them into a higher-dimensional space using K-DMCCA for accurate estimation. The two main contributions are: enhancing fusion performance through the use of kernel-based DMCCA, and introducing an improved iris region descriptor based on a quadtree. The overall approach also includes statistical and geometrical indexing that is calibration-free (it does not require any subsequent adjustment). We validate the robustness of the proposed framework across a wide variety of datasets, which differ in modality (RGB and depth), constraints (a wide range of head poses, not only frontal), quality (accurately labelled for validation), occlusion (due to glasses, hair bangs and facial hair) and illumination. Our method achieved head-pose and gaze estimation accuracy of 4.8° on Cave, 4.6° on MPII, 5.1° on ACS, 5.9° on EYEDIAP, 4.3° on OSLO and 4.6° on UULM datasets.
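The abstract does not include an implementation, but the core fusion idea, projecting two feature views into a shared correlated subspace via a kernelized CCA, can be sketched as follows. This is a minimal two-view kernel CCA with RBF kernels and ridge regularization, not the paper's full discriminative multi-set K-DMCCA formulation; all function names, parameters and the placeholder feature views are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, gamma=1.0):
    """Pairwise RBF (Gaussian) kernel matrix for the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def center_kernel(K):
    """Double-center a kernel matrix (feature-space mean removal)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kernel_cca(X1, X2, gamma=1.0, reg=1e-3, n_components=2):
    """Toy two-view kernel CCA: returns dual coefficients for each view."""
    n = X1.shape[0]
    K1 = center_kernel(rbf_kernel(X1, gamma))
    K2 = center_kernel(rbf_kernel(X2, gamma))
    # Generalized symmetric eigenproblem maximizing correlation of projections
    zeros = np.zeros((n, n))
    A = np.block([[zeros, K1 @ K2], [K2 @ K1, zeros]])
    B = np.block([[K1 @ K1 + reg * np.eye(n), zeros],
                  [zeros, K2 @ K2 + reg * np.eye(n)]])
    vals, vecs = eigh(A, B)
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return top[:n], top[n:]          # alpha1, alpha2 (dual weights)

# Usage sketch: fuse hypothetical iris-region and geometric head features
iris_feats = np.random.randn(100, 32)   # placeholder view 1
geom_feats = np.random.randn(100, 12)   # placeholder view 2
a1, a2 = kernel_cca(iris_feats, geom_feats)
fused = np.hstack([center_kernel(rbf_kernel(iris_feats)) @ a1,
                   center_kernel(rbf_kernel(geom_feats)) @ a2])
```

The fused projections would then feed a regressor or classifier for the pose and gaze angles; the discriminative and multi-set extensions of the paper add class information and more than two views on top of this basic machinery.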
{"title":"Discriminative Robust Head-Pose and Gaze Estimation Using Kernel-DMCCA Features Fusion","authors":"Salah Rabba, M. Kyan, Lei Gao, A. Quddus, A. S. Zandi, L. Guan","doi":"10.1142/s1793351x20500014","DOIUrl":"https://doi.org/10.1142/s1793351x20500014","url":null,"abstract":"There remain outstanding challenges for improving accuracy of multi-feature information for head-pose and gaze estimation. The proposed framework employs discriminative analysis for head-pose and gaze estimation using kernel discriminative multiple canonical correlation analysis (K-DMCCA). The feature extraction component of the framework includes spatial indexing, statistical and geometrical elements. Head-pose and gaze estimation is constructed by feature aggregation and transforming features into a higher dimensional space using K-DMCCA for accurate estimation. The two main contributions are: Enhancing fusion performance through the use of kernel-based DMCCA, and by introducing an improved iris region descriptor based on quadtree. The overall approach is also inclusive of statistical and geometrical indexing that are calibration free (does not require any subsequent adjustment). We validate the robustness of the proposed framework across a wide variety of datasets, which consist of different modalities (RGB and Depth), constraints (wide range of head-poses, not only frontal), quality (accurately labelled for validation), occlusion (due to glasses, hair bang, facial hair) and illumination. Our method achieved an accurate head-pose and gaze estimation of 4.8∘ using Cave, 4.6∘ using MPII, 5.1∘ using ACS, 5.9∘ using EYEDIAP, 4.3∘ using OSLO and 4.6∘ using UULM datasets.","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"85 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126505611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved Semantic Segmentation of Water Bodies and Land in SAR Images Using Generative Adversarial Networks
Pub Date: 2020-03-01. DOI: 10.1142/s1793351x20400036
M. Pai, Vaibhav Mehrotra, Ujjwal Verma, R. Pai
The availability of computationally efficient and powerful deep learning frameworks and high-resolution satellite imagery has created new approaches for developing complex applications in the field of remote sensing. Easy access to the abundant image data repositories made available by satellite programmes of space agencies, such as Copernicus and Landsat, has opened various avenues of research in monitoring the world’s oceans, land and rivers. A challenging research problem in this direction is the accurate identification and subsequent segmentation of surface water in images from the microwave spectrum. In recent years, deep learning methods for semantic segmentation have become the preferred choice, given their high accuracy and ease of use. One major bottleneck in semantic segmentation pipelines is the manual annotation of data. This paper proposes using Generative Adversarial Networks (GANs) on the training data (images and their corresponding labels) to create an enhanced dataset on which the networks can be trained, thereby reducing the human effort of manual labeling. Further, the research also proposes the use of deep learning approaches such as U-Net and FCN-8 to perform efficient segmentation of the auto-annotated, enhanced water-body and land data. The experimental results show that the U-Net model without GAN augmentation achieves strong performance on SAR images, with a pixel accuracy of 0.98 and an F1 score of 0.9923. When augmented with GANs, these metrics rise to a pixel accuracy of 0.99 and an F1 score of 0.9954.
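For readers reproducing the reported numbers, a minimal sketch of how pixel accuracy and F1 score are typically computed for a binary water/land mask is shown below. This is a generic formulation, not code from the paper, and the variable names and placeholder masks are illustrative.

```python
import numpy as np

def segmentation_metrics(pred_mask, true_mask):
    """Pixel accuracy and F1 score for binary masks (1 = water, 0 = land)."""
    pred = pred_mask.astype(bool).ravel()
    true = true_mask.astype(bool).ravel()
    tp = np.sum(pred & true)
    fp = np.sum(pred & ~true)
    fn = np.sum(~pred & true)
    pixel_accuracy = np.mean(pred == true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return pixel_accuracy, f1

# Usage with placeholder masks
pred = np.random.rand(256, 256) > 0.5
true = np.random.rand(256, 256) > 0.5
pa, f1 = segmentation_metrics(pred, true)
print(f"pixel accuracy={pa:.4f}, F1={f1:.4f}")
```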
{"title":"Improved Semantic Segmentation of Water Bodies and Land in SAR Images Using Generative Adversarial Networks","authors":"M. Pai, Vaibhav Mehrotra, Ujjwal Verma, R. Pai","doi":"10.1142/s1793351x20400036","DOIUrl":"https://doi.org/10.1142/s1793351x20400036","url":null,"abstract":"The availability of computationally efficient and powerful Deep Learning frameworks and high-resolution satellite imagery has created new approach for developing complex applications in the field of remote sensing. The easy access to abundant image data repository made available by different satellites of space agencies such as Copernicus, Landsat, etc. has opened various avenues of research in monitoring the world’s oceans, land, rivers, etc. The challenging research problem in this direction is the accurate identification and subsequent segmentation of surface water in images in the microwave spectrum. In the recent years, deep learning methods for semantic segmentation are the preferred choice given its high accuracy and ease of use. One major bottleneck in semantic segmentation pipelines is the manual annotation of data. This paper proposes Generative Adversarial Networks (GANs) on the training data (images and their corresponding labels) to create an enhanced dataset on which the networks can be trained, therefore, reducing human effort of manual labeling. Further, the research also proposes the use of deep-learning approaches such as U-Net and FCN-8 to perform an efficient segmentation of auto annotated, enhanced data of water body and land. The experimental results show that the U-Net model without GAN achieves superior performance on SAR images with pixel accuracy of 0.98 and F1 score of 0.9923. However, when augmented with GANs, the results saw a rise in these metrics with PA of 0.99 and F1 score of 0.9954.","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134281043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning-Based Stair Segmentation and Behavioral Cloning for Autonomous Stair Climbing
Pub Date: 2019-12-26. DOI: 10.1142/s1793351x1940021x
Navid Panchi, Khush Agrawal, Unmesh Patil, Aniket Gujarathi, Aman Jain, Harsha Namdeo, S. Chiddarwar
Mobile robots are widely used in the surveillance industry and for military and industrial applications. In order to carry out surveillance tasks such as urban search and rescue operations, the ability to...
{"title":"Deep Learning-Based Stair Segmentation and Behavioral Cloning for Autonomous Stair Climbing","authors":"Navid Panchi, Khush Agrawal, Unmesh Patil, Aniket Gujarathi, Aman Jain, Harsha Namdeo, S. Chiddarwar","doi":"10.1142/s1793351x1940021x","DOIUrl":"https://doi.org/10.1142/s1793351x1940021x","url":null,"abstract":"Mobile robots are widely used in the surveillance industry, for military and industrial applications. In order to carry out surveillance tasks like urban search and rescue operation, the ability to...","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132096820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Adaptive Ontology Visualization - Predicting User Success from Behavioral Data
Pub Date: 2019-12-26. DOI: 10.1142/s1793351x1940018x
Bo Fu, B. Steichen, Wenlu Zhang
Ontology visualization plays an important role in human data interaction by offering clarity and insight for complex structured datasets. Recent usability studies of ontology visualization techniqu...
{"title":"Towards Adaptive Ontology Visualization - Predicting User Success from Behavioral Data","authors":"Bo Fu, B. Steichen, Wenlu Zhang","doi":"10.1142/s1793351x1940018x","DOIUrl":"https://doi.org/10.1142/s1793351x1940018x","url":null,"abstract":"Ontology visualization plays an important role in human data interaction by offering clarity and insight for complex structured datasets. Recent usability studies of ontology visualization techniqu...","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125441308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion Planning and Control with Randomized Payloads on Real Robot Using Deep Reinforcement Learning
Pub Date: 2019-12-26. DOI: 10.1142/s1793351x19400233
A. Demir, Volkan Sezer
In this study, a unified motion planner with a low-level controller for continuous control of a differential-drive mobile robot under variable payload values is presented. The deep reinforcement agen...
{"title":"Motion Planning and Control with Randomized Payloads on Real Robot Using Deep Reinforcement Learning","authors":"A. Demir, Volkan Sezer","doi":"10.1142/s1793351x19400233","DOIUrl":"https://doi.org/10.1142/s1793351x19400233","url":null,"abstract":"In this study, a unified motion planner with low level controller for continuous control of a differential drive mobile robot under variable payload values is presented. The deep reinforcement agen...","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117085029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning-Based Adaptive Management of QoS and Energy for Mobile Robotic Missions
Pub Date: 2019-12-01. DOI: 10.1142/s1793351x19400221
Dinh-Khanh Ho, K. B. Chehida, Benoît Miramond, M. Auguin
Mobile robotic systems are normally confronted with a shortage of on-board resources, such as computing capability and energy, and are significantly influenced by the dynamics of the surrounding environment. This context requires adaptive run-time decisions that react to dynamic and uncertain operational circumstances in order to guarantee performance requirements while respecting other constraints. In this paper, we propose a reinforcement learning (RL)-based approach for a Quality of Service (QoS)- and energy-aware autonomous robotic mission manager. The mission manager leverages RL by actively monitoring the performance and energy consumption of the mission and then selecting the best mapping parameter configuration by evaluating an accumulated reward that balances QoS and energy. As a case study, we apply this methodology to an autonomous navigation mission. Our simulation results demonstrate the efficiency of the proposed management framework and provide a promising solution for real mobile robotic systems.
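As a rough illustration of the kind of run-time manager the abstract describes, the sketch below implements a tabular Q-learning loop that picks a configuration from a small set and receives a reward trading off QoS against energy. The state encoding, configuration names, reward weights and the `measure_qos_and_energy` stub are all assumptions for illustration; the paper's actual formulation may differ substantially.

```python
import random
from collections import defaultdict

CONFIGS = ["low_power", "balanced", "high_perf"]   # hypothetical mapping configurations
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2              # learning rate, discount, exploration
W_QOS, W_ENERGY = 1.0, 0.5                         # assumed reward weights

Q = defaultdict(float)                              # Q[(state, action)] -> value

def measure_qos_and_energy(config):
    """Stub: in a real system these values come from mission monitoring."""
    return random.random(), random.random()         # (qos in [0,1], energy in [0,1])

def discretize(qos, energy):
    """Coarse state: bucket QoS and energy into low/high."""
    return (qos > 0.5, energy > 0.5)

state = (False, False)
for step in range(1000):
    # epsilon-greedy selection over configurations
    if random.random() < EPSILON:
        action = random.choice(CONFIGS)
    else:
        action = max(CONFIGS, key=lambda a: Q[(state, a)])

    qos, energy = measure_qos_and_energy(action)
    reward = W_QOS * qos - W_ENERGY * energy         # balance QoS against energy cost
    next_state = discretize(qos, energy)

    # Q-learning update
    best_next = max(Q[(next_state, a)] for a in CONFIGS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state
```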
{"title":"Learning-Based Adaptive Management of QoS and Energy for Mobile Robotic Missions","authors":"Dinh-Khanh Ho, K. B. Chehida, Benoît Miramond, M. Auguin","doi":"10.1142/s1793351x19400221","DOIUrl":"https://doi.org/10.1142/s1793351x19400221","url":null,"abstract":"Mobile robotic systems are normally confronted with the shortage of on-board resources such as computing capabilities and energy, as well as significantly influenced by the dynamics of surrounding environmental conditions. This context requires adaptive decisions at run-time that react to the dynamic and uncertain operational circumstances for guaranteeing the performance requirements while respecting the other constraints. In this paper, we propose a reinforcement learning (RL)-based approach for Quality of Service QoS and energy-aware autonomous robotic mission manager. The mobile robotic mission manager leverages the idea of (RL) by monitoring actively the state of performance and energy consumption of the mission and then selecting the best mapping parameter configuration by evaluating an accumulative reward feedback balancing between QoS and energy. As a case study, we apply this methodology to an autonomous navigation mission. Our simulation results demonstrate the efficiency of the proposed management framework and provide a promising solution for the real mobile robotic systems.","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"225 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124480237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decrease Product Rating Uncertainty Through Focused Reviews Solicitation
Pub Date: 2019-12-01. DOI: 10.1142/s1793351x19400208
Nhat X. T. Le, Ryan Rivas, James M. Flegal, Vagelis Hristidis
Customer reviews are an essential resource for reducing an online product’s uncertainty, which has been shown to be a critical factor in purchase decisions. Existing e-commerce platforms typically ask users to write free-form text reviews, which are sometimes augmented by a small set of predefined questions, e.g. “rate the product description’s accuracy from 1 to 5.” In this paper, we argue that this “passive” style of review solicitation is suboptimal for achieving low-uncertainty “review profiles” for products. Its key drawback is that some product aspects receive a very large number of reviews while other aspects do not have enough reviews to draw confident conclusions. Therefore, we hypothesize that we can achieve lower-uncertainty review profiles by carefully selecting which aspects users are asked to rate. To test this hypothesis, we propose various techniques to dynamically select which aspects to ask users to rate given the current review profile of a product. We use Bayesian inference principles to define reasonable review profile uncertainty measures, specifically via an aspect’s rating variance. We compare our proposed aspect selection techniques to several baselines on several review profile uncertainty measures. Experimental results on two real-world datasets show that our methods lead to better review profile uncertainty compared to aspect selection baselines and traditional passive review solicitation. Moreover, we present and evaluate a hybrid solicitation method that combines the advantages of both active and passive review solicitation.
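The abstract's idea of soliciting ratings for the aspect whose rating distribution is most uncertain can be illustrated with a small Bayesian sketch: model each aspect's 1-5 star ratings with a Dirichlet posterior and ask about the aspect with the highest posterior rating variance. This is a generic illustration under an assumed uniform prior, not the authors' exact uncertainty measure, and the example review profile is hypothetical.

```python
import numpy as np

STARS = np.arange(1, 6)                       # possible ratings 1..5

def posterior_rating_variance(counts, prior=1.0):
    """Variance of the rating under the posterior mean category probabilities.

    counts: observed number of 1..5 star ratings for one aspect.
    A fuller treatment would integrate over the Dirichlet posterior,
    but this captures the ordering between aspects.
    """
    alpha = np.asarray(counts, dtype=float) + prior
    p = alpha / alpha.sum()                   # posterior mean probabilities
    mean = np.dot(p, STARS)
    return np.dot(p, (STARS - mean) ** 2)

def pick_aspect_to_solicit(review_profile):
    """Select the aspect whose rating distribution is most uncertain."""
    variances = {aspect: posterior_rating_variance(c)
                 for aspect, c in review_profile.items()}
    return max(variances, key=variances.get)

# Hypothetical review profile: aspect -> counts of 1..5 star ratings
profile = {
    "battery": [0, 1, 2, 10, 25],     # many consistent ratings -> low variance
    "shipping": [3, 2, 4, 3, 2],      # few, spread-out ratings -> high variance
}
print(pick_aspect_to_solicit(profile))        # -> "shipping"
```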
{"title":"Decrease Product Rating Uncertainty Through Focused Reviews Solicitation","authors":"Nhat X. T. Le, Ryan Rivas, James M. Flegal, Vagelis Hristidis","doi":"10.1142/s1793351x19400208","DOIUrl":"https://doi.org/10.1142/s1793351x19400208","url":null,"abstract":"Customer reviews are an essential resource to reduce an online product’s uncertainty, which has been shown to be a critical factor for its purchase decision. Existing e-commerce platforms typically ask users to write free-form text reviews, which are sometimes augmented by a small set of predefined questions, e.g. “rate the product description’s accuracy from 1 to 5.” In this paper, we argue that this “passive” style of review solicitation is suboptimal in achieving low-uncertainty “review profiles” for products. Its key drawback is that some product aspects receive a very large number of reviews while other aspects do not have enough reviews to draw confident conclusions. Therefore, we hypothesize that we can achieve lower-uncertainty review profiles by carefully selecting which aspects users are asked to rate. To test this hypothesis, we propose various techniques to dynamically select which aspects to ask users to rate given the current review profile of a product. We use Bayesian inference principles to define reasonable review profile uncertainty measures; specifically, via an aspect’s rating variance. We compare our proposed aspect selection techniques to several baselines on several review profile uncertainty measures. Experimental results on two real-world datasets show that our methods lead to better review profile uncertainty compared to aspect selection baselines and traditional passive review solicitations. Moreover, we present and evaluate a hybrid solicitation method that combines the advantages of both active and passive review solicitations.","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117036283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of Feature Learning Methods for Voice Disorder Detection
Pub Date: 2019-12-01. DOI: 10.1142/s1793351x19400191
Hongzhao Guan, Alexander Lerch
Voice disorders are a frequently encountered health issue. Many people, however, either cannot afford to visit a professional doctor or neglect to take good care of their voice. With the aim of giving a patient a preliminary diagnosis without professional medical devices, previous research has shown that the detection of voice disorders can be carried out using machine learning and acoustic features extracted from voice recordings. Considering the increasing popularity of deep learning, feature learning and transfer learning, this study explores the possibility of using these methods to assign voice recordings to one of two classes: Normal and Pathological. While the results show the general viability of deep learning and feature learning for the automatic recognition of voice disorders, they also lead to discussions on how to choose a pre-trained model when using transfer learning for this task. Furthermore, the results demonstrate the shortcomings of existing datasets for voice disorder detection, such as insufficient dataset size and lack of generality.
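A minimal baseline in the spirit of the earlier acoustic-feature approaches the abstract mentions (not the study's deep, feature or transfer learning pipelines) could extract MFCC statistics per recording and train a binary Normal/Pathological classifier. The file lists, labels and hyperparameters below are placeholders; a real experiment would use an established voice disorder corpus.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mfcc_stats(path, n_mfcc=13):
    """Mean and std of MFCCs over time as a fixed-length recording descriptor."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder file lists standing in for a labelled voice disorder dataset
normal_files = ["normal_01.wav", "normal_02.wav"]
pathological_files = ["path_01.wav", "path_02.wav"]

X = np.array([mfcc_stats(f) for f in normal_files + pathological_files])
y = np.array([0] * len(normal_files) + [1] * len(pathological_files))

clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, X, y, cv=2)     # tiny cross-validation, illustration only
print("accuracy:", scores.mean())
```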
{"title":"Evaluation of Feature Learning Methods for Voice Disorder Detection","authors":"Hongzhao Guan, Alexander Lerch","doi":"10.1142/s1793351x19400191","DOIUrl":"https://doi.org/10.1142/s1793351x19400191","url":null,"abstract":"Voice disorder is a frequently encountered health issue. Many people, however, either cannot afford to visit a professional doctor or neglect to take good care of their voice. In order to give a patient a preliminary diagnosis without using professional medical devices, previous research has shown that the detection of voice disorders can be carried out by utilizing machine learning and acoustic features extracted from voice recordings. Considering the increasing popularity of deep learning, feature learning and transfer learning, this study explores the possibilities of using these methods to assign voice recordings into one of two classes—Normal and Pathological. While the results show the general viability of deep learning and feature learning for the automatic recognition of voice disorders, they also lead to discussions on how to choose a pre-trained model when using transfer learning for this task. Furthermore, the results demonstrate the shortcomings of the existing datasets for voice disorder detection such as insufficient dataset size and lack of generality.","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122935004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wavelet-packets Associated with Support Vector Machine Are Effective for Monophone Sorting in Music Signals
Pub Date: 2019-09-01. DOI: 10.1142/s1793351x19500028
Rafael Rubiati Scalvenzi, R. Guido, N. Marranghello
An abstract interpretation is usually required to analyze acoustic compositions. Nevertheless, there is much signal processing research focusing on music processing and similar topics. In that context, the semantic information contained in a melody, involving major and minor chords, and sharps and flats associated with semibreve, minim, crotchet, quaver, semiquaver and demisemiquaver notes, can help in the study of musical sounds. Thus, multiresolution analysis based on the discrete wavelet-packet transform (DWPT), associated with a support vector machine (SVM), is used in this paper to inspect and classify those signals, correlating them with their respective acoustic patterns. Results over hundreds of inputs provided almost full accuracy, confirming the efficacy of the proposed approach for both off-line and real-time usage.
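The combination of wavelet-packet features and an SVM described in the abstract can be sketched roughly as follows: decompose each signal with a discrete wavelet-packet transform, use the energy of each terminal node as a feature vector, and feed those vectors to an SVM. The wavelet family, decomposition level and synthetic test tones are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_packet_energies(signal, wavelet="db4", level=4):
    """Energy of each terminal wavelet-packet node, in frequency order."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(node.data ** 2) for node in nodes])

# Synthetic stand-in data: two classes of noisy monophonic tones at different pitches
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048, endpoint=False)
signals, labels = [], []
for _ in range(40):
    f = rng.choice([220.0, 440.0])                 # class determined by fundamental frequency
    x = np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
    signals.append(wavelet_packet_energies(x))
    labels.append(0 if f == 220.0 else 1)

clf = SVC(kernel="rbf").fit(np.array(signals), np.array(labels))
print("training accuracy:", clf.score(np.array(signals), np.array(labels)))
```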
{"title":"Wavelet-packets Associated with Support Vector Machine Are Effective for Monophone Sorting in Music Signals","authors":"Rafael Rubiati Scalvenzi, R. Guido, N. Marranghello","doi":"10.1142/s1793351x19500028","DOIUrl":"https://doi.org/10.1142/s1793351x19500028","url":null,"abstract":"An abstract interpretation is usually required to analyze acoustic compositions. Nevertheless, there is much signal processing-related research focusing on music processing and similar topics. In that context, the semantic information contained in the melody involving major and minor chords, sharps and flats associated with semibreve, minim, crotchet, quaver, semiquaver and demisemiquaver notes can help in the study of musical sounds. Thus, multiresolution analysis based on discrete wavelet-packet transform (DWPT) associated with a support vector machine (SVM) is used in this paper to inspect and classify those signals, correlating them with a respective acoustic pattern. Results over hundreds of inputs provided almost full accuracy, reassuring the efficacy of the proposed approach for both off-line and real-time usage.","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"23 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128396817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
User Perception of Situated Product Recommendations in Augmented Reality
Pub Date: 2019-09-01. DOI: 10.1142/s1793351x19400129
Brandon Huynh, Adam Ibrahim, YunSuk Chang, Tobias Höllerer, J. O'Donovan
Augmented reality (AR) interfaces increasingly utilize artificial intelligence systems to tailor content and experiences to the user. We explore the effects of one such system — a recommender system for online shopping — which allows customers to view personalized product recommendations in the physical spaces where they might be used. We describe results of a [Formula: see text] condition exploratory study in which recommendation quality was varied across three user interface types. Our results highlight potential differences in user perception of the recommended objects in an AR environment. Specifically, users rate product recommendations significantly higher in AR and in a 3D browser interface, and show a significant increase in trust in the recommender system, compared to a web interface with 2D product images. Through semi-structured interviews, we gather participant feedback which suggests AR interfaces perform better due to their ability to view products within the physical context where they will be used.
{"title":"User Perception of Situated Product Recommendations in Augmented Reality","authors":"Brandon Huynh, Adam Ibrahim, YunSuk Chang, Tobias Höllerer, J. O'Donovan","doi":"10.1142/s1793351x19400129","DOIUrl":"https://doi.org/10.1142/s1793351x19400129","url":null,"abstract":"Augmented reality (AR) interfaces increasingly utilize artificial intelligence systems to tailor content and experiences to the user. We explore the effects of one such system — a recommender system for online shopping — which allows customers to view personalized product recommendations in the physical spaces where they might be used. We describe results of a [Formula: see text] condition exploratory study in which recommendation quality was varied across three user interface types. Our results highlight potential differences in user perception of the recommended objects in an AR environment. Specifically, users rate product recommendations significantly higher in AR and in a 3D browser interface, and show a significant increase in trust in the recommender system, compared to a web interface with 2D product images. Through semi-structured interviews, we gather participant feedback which suggests AR interfaces perform better due to their ability to view products within the physical context where they will be used.","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130120207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}