A prediction-explanation framework is proposed to identify when and which behaviors are involved in forming interlocutors’ impressions in group discussions. We target the self-reported scores of 16 impressions, including enjoyment and concentration. To that end, we formulate the problem as discovering the behavioral features that contribute to impression prediction and determining the timings at which these behaviors frequently occur. To solve this problem, this paper proposes a two-fold framework consisting of a prediction part followed by an explanation part. The prediction part employs random forest regressors using functional head-movement features and BERT-based linguistic features, which capture various aspects of interactive conversational behavior. The explanation part measures each feature’s contribution to the prediction using a SHAP analysis and introduces the novel idea of temporally decomposing feature contributions over time. Influential behaviors and their timings are identified from local maxima of the temporal distribution of feature contributions. Targeting discussions by 17 four-female groups, the predictability and explainability of the proposed framework are confirmed through case studies and quantitative evaluations of the detected timings.
{"title":"Identifying Interlocutors' Behaviors and its Timings Involved with Impression Formation from Head-Movement Features and Linguistic Features","authors":"Shumpei Otsuchi, Koya Ito, Yoko Ishii, Ryo Ishii, Shinichirou Eitoku, Kazuhiro Otsuka","doi":"10.1145/3577190.3614124","DOIUrl":"https://doi.org/10.1145/3577190.3614124","url":null,"abstract":"A prediction-explanation framework is proposed to identify when and what behaviors are involved in forming interlocutors’ impressions in group discussions. We targeted the self-reported scores of 16 impressions, including enjoyment and concentration. To that end, we formulate the problem as discovering behavioral features that contributed to the impression prediction and determining the timings that the behaviors frequently occurred. To solve this problem, this paper proposes a two-fold framework consisting of the prediction part followed by the explanation part. The former prediction part employs random forest regressors using functional head-movement features and BERT-based linguistic features, which can capture various aspects of interactive conversational behaviors. The later part measures the levels of features’ contribution to the prediction using a SHAP analysis and introduces a novel idea of temporal decomposition of features’ contributions over time. The influential behaviors and their timings are identified from local maximums of the temporal distribution of features’ contributions. Targeting 17-group 4-female discussions, the predictability and explainability of the proposed framework are confirmed by some case studies and quantitative evaluations of the detected timings.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135044544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a novel approach to mitigate bias in facial expression recognition (FER) models. Our method aims to reduce sensitive-attribute information, such as gender, age, or race, in the embeddings produced by FER models. We employ a kernel mean shrinkage estimator to estimate the kernel means of the embedding distributions associated with different sensitive-attribute groups, such as young and old, in a Hilbert space. Using this estimate, we calculate the maximum mean discrepancy (MMD) between the distributions and incorporate it into the classifier loss along with an adversarial loss; both are minimized during training to improve distribution alignment. Our method makes sensitive attributes less recognizable to the model, which in turn promotes fairness. Additionally, for the first time, we analyze attractiveness as an important sensitive attribute in FER models and demonstrate that FER models can indeed exhibit bias toward more attractive faces. To demonstrate the efficacy of our method in reducing bias with respect to different sensitive attributes (including the newly proposed attractiveness attribute), we perform several experiments on two widely used datasets, CelebA and RAF-DB. The results, in terms of both accuracy and fairness measures, outperform the state of the art in most cases, demonstrating the effectiveness of the proposed method.
{"title":"Toward Fair Facial Expression Recognition with Improved Distribution Alignment","authors":"Mojtaba Kolahdouzi, Ali Etemad","doi":"10.1145/3577190.3614141","DOIUrl":"https://doi.org/10.1145/3577190.3614141","url":null,"abstract":"We present a novel approach to mitigate bias in facial expression recognition (FER) models. Our method aims to reduce sensitive attribute information such as gender, age, or race, in the embeddings produced by FER models. We employ a kernel mean shrinkage estimator to estimate the kernel mean of the distributions of the embeddings associated with different sensitive attribute groups, such as young and old, in the Hilbert space. Using this estimation, we calculate the maximum mean discrepancy (MMD) distance between the distributions and incorporate it in the classifier loss along with an adversarial loss, which is then minimized through the learning process to improve the distribution alignment. Our method makes sensitive attributes less recognizable for the model, which in turn promotes fairness. Additionally, for the first time, we analyze the notion of attractiveness as an important sensitive attribute in FER models and demonstrate that FER models can indeed exhibit biases towards more attractive faces. To prove the efficacy of our model in reducing bias regarding different sensitive attributes (including the newly proposed attractiveness attribute), we perform several experiments on two widely used datasets, CelebA and RAF-DB. The results in terms of both accuracy and fairness measures outperform the state-of-the-art in most cases, demonstrating the effectiveness of the proposed method.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"274 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135044656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bernardo Marques, Samuel Silva, Rafael Maio, João Alves, Carlos Ferreira, Paulo Dias, Beatriz Sousa Santos
Over time, numerous multimodal eXtended Reality (XR) user studies have been conducted in laboratory environments, with participants completing tasks under the guidance of a researcher. Although generalizable results have contributed to increasing the maturity of the field, it is also paramount to address the ecological validity of evaluations outside the laboratory. Although real-world scenarios are clearly challenging, successful in-situ and remote deployment has become a realistic way to address a broad variety of research questions: it expands the participant sample to more specific target users, accounts for multimodal constraints not reflected in controlled laboratory settings, and offers other benefits. In this paper, a set of multimodal XR experiments conducted outside the laboratory is described (e.g., industrial field studies, remote collaborative tasks, longitudinal rehabilitation exercises). A list of lessons learned is then reported, illustrating challenges and opportunities, with the aim of raising awareness in the research community and facilitating further evaluations.
{"title":"Evaluating Outside the Box: Lessons Learned on eXtended Reality Multi-modal Experiments Beyond the Laboratory","authors":"Bernardo Marques, Samuel Silva, Rafael Maio, João Alves, Carlos Ferreira, Paulo Dias, Beatriz Sousa Santos","doi":"10.1145/3577190.3614134","DOIUrl":"https://doi.org/10.1145/3577190.3614134","url":null,"abstract":"Over time, numerous multimodal eXtended Reality (XR) user studies have been conducted in laboratory environments, with participants fulfilling tasks under the guidance of a researcher. Although generalizable results contributed to increase the maturity of the field, it is also paramount to address the ecological validity of evaluations outside the laboratory. Despite real-world scenarios being clearly challenging, successful in-situ and remote deployment has become realistic to address a broad variety of research questions, thus, expanding participants’ sample to more specific target users, considering multi-modal constraints not reflected in controlled laboratory settings and other benefits. In this paper, a set of multimodal XR experiments conducted outside the laboratory are described (e.g., industrial field studies, remote collaborative tasks, longitudinal rehabilitation exercises). Then, a list of lessons learned is reported, illustrating challenges, and opportunities, aiming to increase the level of awareness of the research community and facilitate performing further evaluations.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135044908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a computational study to analyze and predict turns (i.e., turn-taking and turn-keeping) in multiparty conversations. Specifically, we use a high-fidelity hybrid data acquisition system to capture a large-scale set of multimodal natural conversational behaviors of interlocutors in three-party conversations, including gaze, head movements, body movements, and speech. Based on the inter-pausal units (IPUs) extracted from this in-house dataset, we propose a transformer-based computational model that predicts turns from the interlocutor states (speaking/back-channeling/silence) and the gaze targets. Our model robustly achieves more than 80% accuracy, and its generalizability was extensively validated through cross-group experiments. We also introduce a novel computational metric, the “relative engagement level” (REL) of IPUs, and validate that it differs significantly between turn-keeping and turn-taking IPUs, as well as between different conversational groups. Our experiments further show that the patterns of interlocutor states are a more effective cue than gaze behaviors for predicting turns in multiparty conversations.
{"title":"Multimodal Turn Analysis and Prediction for Multi-party Conversations","authors":"Meng-Chen Lee, Mai Trinh, Zhigang Deng","doi":"10.1145/3577190.3614139","DOIUrl":"https://doi.org/10.1145/3577190.3614139","url":null,"abstract":"This paper presents a computational study to analyze and predict turns (i.e., turn-taking and turn-keeping) in multiparty conversations. Specifically, we use a high-fidelity hybrid data acquisition system to capture a large-scale set of multi-modal natural conversational behaviors of interlocutors in three-party conversations, including gazes, head movements, body movements, speech, etc. Based on the inter-pausal units (IPUs) extracted from the in-house acquired dataset, we propose a transformer-based computational model to predict the turns based on the interlocutor states (speaking/back-channeling/silence) and the gaze targets. Our model can robustly achieve more than 80% accuracy, and the generalizability of our model was extensively validated through cross-group experiments. Also, we introduce a novel computational metric called “relative engagement level\" (REL) of IPUs, and further validate its statistical significance between turn-keeping IPUs and turn-taking IPUs, and between different conversational groups. Our experimental results also found that the patterns of the interlocutor states can be used as a more effective cue than their gaze behaviors for predicting turns in multiparty conversations.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135044916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As AI becomes ubiquitous, its physical embodiment, robots, will also gradually enter our lives. As they do, we will demand that they understand us, predict our needs and wants, and adapt to us as we change our moods and minds, learn, grow, and age. The nexus created by recent major advances in machine learning for machine perception, navigation, and natural language processing has enabled human-robot interaction in real-world contexts, just as the need for human services continues to grow, from elder care to nursing to education and training. This talk will discuss our research in socially assistive robotics (SAR), which uses embodied social interaction to support user goals in health, wellness, training, and education. SAR brings together machine learning for user modeling, multimodal behavioral signal processing, and affective computing to enable robots to understand, interact with, and adapt to users’ specific and ever-changing needs. The talk will cover methods and challenges of using multimodal interaction data and expressive robot behavior to monitor, coach, motivate, and support a wide variety of user populations and use cases. We will cover insights from work with users across the age span (infants, children, adults, elderly), ability span (typically developing, autism, stroke, Alzheimer’s), contexts (schools, therapy centers, homes), and deployment durations (up to 6 months), as well as commercial implications.
{"title":"A Robot Just for You: Multimodal Personalized Human-Robot Interaction and the Future of Work and Care","authors":"Maja Mataric","doi":"10.1145/3577190.3616524","DOIUrl":"https://doi.org/10.1145/3577190.3616524","url":null,"abstract":"As AI becomes ubiquitous, its physical embodiment—robots–will also gradually enter our lives. As they do, we will demand that they understand us, predict our needs and wants, and adapt to us as we change our moods and minds, learn, grow, and age. The nexus created by recent major advances in machine learning for machine perception, navigation, and natural language processing has enabled human-robot interaction in real-world contexts, just as the need for human services continues to grow, from elder care to nursing to education and training. This talk will discuss our research in socially assistive robotics (SAR), which uses embodied social interaction to support user goals in health, wellness, training, and education. SAR brings together machine learning for user modeling, multimodal behavioral signal processing, and affective computing to enable robots to understand, interact, and adapt to users’ specific and ever-changing needs. The talk will cover methods and challenges of using multi-modal interaction data and expressive robot behavior to monitor, coach, motivate, and support a wide variety of user populations and use cases. We will cover insights from work with users across the age span (infants, children, adults, elderly), ability span (typically developing, autism, stroke, Alzheimer’s), contexts (schools, therapy centers, homes), and deployment durations (up to 6 months), as well as commercial implications.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135045703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting the future trajectory of a crowd is important for safety, to prevent disasters such as stampedes or collisions. Extensive research has explored trajectory prediction in typical crowd scenarios, where the majority of individuals can be easily identified. However, this study focuses on a more challenging scenario, the super-crowd scene, in which individuals in the crowd can only be annotated by their heads. In this scenario, person re-identification for tracking performs poorly due to the lack of clear image data. Our research proposes a clustering strategy to overcome the re-identification problem and predict the trajectories of crowd clusters. Two-dimensional (2D) maps and multiple cameras will be used to capture full pictures of crowds in a location and to extract the venue’s spatial data (see figure 1). The research methodology encompasses several key steps: evaluating data extraction with state-of-the-art methods, estimating crowd clusters, integrating 2D maps and multi-view fusion, and evaluating the proposed method on a dataset of multi-view videos collected in a real-world super-crowded scenario.
{"title":"Crowd Behaviour Prediction using Visual and Location Data in Super-Crowded Scenarios","authors":"Antonius Bima Murti Wijaya","doi":"10.1145/3577190.3614230","DOIUrl":"https://doi.org/10.1145/3577190.3614230","url":null,"abstract":"Predicting the future trajectory of a crowd is important for safety to prevent disasters such as stampedes or collisions. Extensive research has been conducted to explore trajectory prediction in typical crowd scenarios, where the majority of individuals can be easily identified. However, this study focuses on a more challenging scenario known as the super-crowd scene, wherein individuals within the crowd can only be annotated based on their heads. In this particular scenario, people’s re-identification process in tracking does not perform well due to a lack of clear image data. Our research proposes a clustering strategy to overcome people re-identification problems and predict the cluster crowd trajectory. Two-dimensional(2D) maps and multi-cameras will be used to capture full pictures of crowds in a location and extract the venue’s spatial data (see figure 1). The research methodology encompasses several key steps, including evaluating data extraction of the state-of-the-art methods, estimating crowd clusters, integrating 2D maps and multi-view fusion, and evaluating the proposed method on a dataset of multi-view videos collected in a real-world super-crowded scenario.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135044379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dustin Pulver, Prithila Angkan, Paul Hungler, Ali Etemad
Cognitive load, the amount of mental effort required for task completion, plays an important role in performance and decision-making outcomes, making its classification and analysis essential in various sensitive domains. In this paper, we present a new solution for the classification of cognitive load using electroencephalogram (EEG). Our model uses a transformer architecture employing transfer learning between emotions and cognitive load. We pre-train our model using self-supervised masked autoencoding on emotion-related EEG datasets and use transfer learning with both frozen weights and fine-tuning to perform downstream cognitive load classification. To evaluate our method, we carry out a series of experiments utilizing two publicly available EEG-based emotion datasets, namely SEED and SEED-IV, for pre-training, while we use the CL-Drive dataset for downstream cognitive load classification. The results of our experiments show that our proposed approach achieves strong results and outperforms conventional single-stage fully supervised learning. Moreover, we perform detailed ablation and sensitivity studies to evaluate the impact of different aspects of our proposed solution. This research contributes to the growing body of literature in affective computing with a focus on cognitive load, and opens up new avenues for future research in the field of cross-domain transfer learning using self-supervised pre-training.
{"title":"EEG-based Cognitive Load Classification using Feature Masked Autoencoding and Emotion Transfer Learning","authors":"Dustin Pulver, Prithila Angkan, Paul Hungler, Ali Etemad","doi":"10.1145/3577190.3614113","DOIUrl":"https://doi.org/10.1145/3577190.3614113","url":null,"abstract":"Cognitive load, the amount of mental effort required for task completion, plays an important role in performance and decision-making outcomes, making its classification and analysis essential in various sensitive domains. In this paper, we present a new solution for the classification of cognitive load using electroencephalogram (EEG). Our model uses a transformer architecture employing transfer learning between emotions and cognitive load. We pre-train our model using self-supervised masked autoencoding on emotion-related EEG datasets and use transfer learning with both frozen weights and fine-tuning to perform downstream cognitive load classification. To evaluate our method, we carry out a series of experiments utilizing two publicly available EEG-based emotion datasets, namely SEED and SEED-IV, for pre-training, while we use the CL-Drive dataset for downstream cognitive load classification. The results of our experiments show that our proposed approach achieves strong results and outperforms conventional single-stage fully supervised learning. Moreover, we perform detailed ablation and sensitivity studies to evaluate the impact of different aspects of our proposed solution. This research contributes to the growing body of literature in affective computing with a focus on cognitive load, and opens up new avenues for future research in the field of cross-domain transfer learning using self-supervised pre-training.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135044387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large multimodal deep learning models such as Contrastive Language-Image Pre-training (CLIP) have become increasingly powerful, with applications across several domains in recent years. CLIP operates on visual and language modalities and forms a part of several popular models, such as DALL-E and Stable Diffusion. It is trained on a large dataset of millions of image-text pairs crawled from the internet. Such large datasets are often used for training without filtering, leading to models inheriting social biases from internet data. Given that models such as CLIP are applied in such a wide variety of applications, ranging from social media to education, it is vital that harmful biases are detected. However, due to the unbounded nature of the possible inputs and outputs, traditional metrics such as accuracy cannot capture the range and complexity of biases present in the model. In this paper, we present an audit of CLIP using an established technique from natural language processing, the Word Embedding Association Test (WEAT), to detect and quantify gender bias in CLIP, and demonstrate that it can provide a quantifiable measure of such stereotypical associations. We detected, measured, and visualised various types of stereotypical gender associations with respect to character descriptions and occupations, and found that CLIP shows evidence of stereotypical gender bias.
{"title":"Multimodal Bias: Assessing Gender Bias in Computer Vision Models with NLP Techniques","authors":"Abhishek Mandal, Suzanne Little, Susan Leavy","doi":"10.1145/3577190.3614156","DOIUrl":"https://doi.org/10.1145/3577190.3614156","url":null,"abstract":"Large multimodal deep learning models such as Contrastive Language Image Pretraining (CLIP) have become increasingly powerful with applications across several domains in recent years. CLIP works on visual and language modalities and forms a part of several popular models, such as DALL-E and Stable Diffusion. It is trained on a large dataset of millions of image-text pairs crawled from the internet. Such large datasets are often used for training purposes without filtering, leading to models inheriting social biases from internet data. Given that models such as CLIP are being applied in such a wide variety of applications ranging from social media to education, it is vital that harmful biases are detected. However, due to the unbounded nature of the possible inputs and outputs, traditional bias metrics such as accuracy cannot detect the range and complexity of biases present in the model. In this paper, we present an audit of CLIP using an established technique from natural language processing called Word Embeddings Association Test (WEAT) to detect and quantify gender bias in CLIP and demonstrate that it can provide a quantifiable measure of such stereotypical associations. We detected, measured, and visualised various types of stereotypical gender associations with respect to character descriptions and occupations and found that CLIP shows evidence of stereotypical gender bias.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135044918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose an arbitrarily angled interactive audiovisual representation technique that combines a unique sound field synthesis with visual representation in order to expand the possibilities of interactive immersive viewing experiences on mobile devices. From multi-channel surround sound, this technique can synthesize two-channel stereo sound with a constant stereo width over an arbitrary angle range, from a minimum of 30 to a maximum of 360 degrees, centered on an arbitrary direction. The visual representation can be either an equirectangular or a stereographic projection. The developed video player app allows users to enjoy arbitrarily angled 360-degree videos by manipulating the touchscreen, and the stereo sound and visual representation change in spatial synchronization with the view. The app was released as a demonstration, and its acceptability and worth were investigated through interviews and subjective assessment tests. The app has been well received, and to date, more than 30 pieces of content have been produced across multiple genres, with a total of more than 200,000 views.
{"title":"Augmented Immersive Viewing and Listening Experience Based on Arbitrarily Angled Interactive Audiovisual Representation","authors":"Toshiharu Horiuchi, Shota Okubo, Tatsuya Kobayashi","doi":"10.1145/3577190.3614138","DOIUrl":"https://doi.org/10.1145/3577190.3614138","url":null,"abstract":"We propose an arbitrarily angled interactive audiovisual representation technique that combines a unique sound field synthesis with visual representation in order to augment the possibility of interactive immersive viewing experiences on mobile devices. This technique can synthesize two-channel stereo sound with constant stereo width having an arbitrary angle range from minimum 30 to maximum 360 degrees centering on an arbitrary direction from multi-channel surround sound. The visual representation can be chosen either equirectangular projection or stereographic projection. The developed video player app allows users to enjoy arbitrarily angled 360-degree videos by manipulating the touchscreen, and the stereo sound and the visual representation changes in terms of its spatial synchronization depending on the view. The app was released as a demonstration, and its acceptability and worth were investigated through interviews and subjective assessment tests. The app has been well received, and to date, more than 30 pieces of content have been produced in multiple genres, with a total of more than 200,000 views.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135044923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The generation of realistic and contextually relevant co-speech gestures is a challenging yet increasingly important task in the creation of multimodal artificial agents. Prior methods focused on learning a direct correspondence between co-speech gesture representations and produced motions, which created seemingly natural but often unconvincing gestures during human assessment. We present an approach to pre-train partial gesture sequences using a generative adversarial network with a quantization pipeline. The resulting codebook vectors serve as both input and output in our framework, forming the basis for the generation and reconstruction of gestures. By learning the mapping of a latent space representation as opposed to directly mapping it to a vector representation, this framework facilitates the generation of highly realistic and expressive gestures that closely replicate human movement and behavior, while simultaneously avoiding artifacts in the generation process. We evaluate our approach by comparing it with established methods for generating co-speech gestures as well as with existing datasets of human behavior. We also perform an ablation study to assess our findings. The results show that our approach outperforms the current state of the art by a clear margin and is partially indistinguishable from human gesturing. We make our data pipeline and the generation framework publicly available.
{"title":"AQ-GT: a Temporally Aligned and Quantized GRU-Transformer for Co-Speech Gesture Synthesis","authors":"Hendric Voß, Stefan Kopp","doi":"10.1145/3577190.3614135","DOIUrl":"https://doi.org/10.1145/3577190.3614135","url":null,"abstract":"The generation of realistic and contextually relevant co-speech gestures is a challenging yet increasingly important task in the creation of multimodal artificial agents. Prior methods focused on learning a direct correspondence between co-speech gesture representations and produced motions, which created seemingly natural but often unconvincing gestures during human assessment. We present an approach to pre-train partial gesture sequences using a generative adversarial network with a quantization pipeline. The resulting codebook vectors serve as both input and output in our framework, forming the basis for the generation and reconstruction of gestures. By learning the mapping of a latent space representation as opposed to directly mapping it to a vector representation, this framework facilitates the generation of highly realistic and expressive gestures that closely replicate human movement and behavior, while simultaneously avoiding artifacts in the generation process. We evaluate our approach by comparing it with established methods for generating co-speech gestures as well as with existing datasets of human behavior. We also perform an ablation study to assess our findings. The results show that our approach outperforms the current state of the art by a clear margin and is partially indistinguishable from human gesturing. We make our data pipeline and the generation framework publicly available.","PeriodicalId":93171,"journal":{"name":"Companion Publication of the 2020 International Conference on Multimodal Interaction","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135044924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}