Emotional Acceptance Measure (EAM): An Objective Evaluation Method Towards Information Communication Effect
Pub Date: 2022-07-18 | DOI: 10.1109/ICMEW56448.2022.9859363
Xiao Yang, Haonan Cheng, Hanyang Song, Li Yang, Long Ye
In this paper, we propose the Emotional Acceptance Measure (EAM), an objective information communication effect evaluation (OICEE) method based on the theory of information entropy. Existing evaluation methods for information communication mostly rely on questionnaires and expert scoring, which consume considerable human resources. To address this issue, we tackle the previously unexplored OICEE task and take the first step toward objectively assessing the effect produced by communication behavior in the emotional dimension. Specifically, we construct a dataset for evaluating the information communication effect, design a CNN-BiGRU model with a self-attention mechanism to calculate emotional information entropy, and propose a formula for calculating the EAM score. For the first time, we introduce a way to objectively evaluate the information communication effect from the emotional dimension. A comparison with manually annotated evaluations shows that the EAM score achieves a 94.41% correlation with subjective user evaluations, demonstrating the reasonableness and validity of our proposed objective evaluation method.
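The abstract does not spell out the architecture; as a minimal, hedged sketch of a CNN-BiGRU model with self-attention, followed by an entropy computed from the predicted emotion distribution (vocabulary size, widths, and the six emotion classes are assumptions, not the authors' configuration):

```python
# Hedged sketch of a CNN-BiGRU + self-attention emotion model (not the authors' code).
# Input/embedding sizes and the number of emotion classes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNBiGRUAttention(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=128, n_emotions=6):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)  # local n-gram features
        self.bigru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_emotions)

    def forward(self, tokens):                                   # tokens: (B, T) integer ids
        x = self.emb(tokens)                                     # (B, T, E)
        x = F.relu(self.conv(x.transpose(1, 2))).transpose(1, 2) # (B, T, H)
        x, _ = self.bigru(x)                                     # (B, T, 2H)
        x, _ = self.attn(x, x, x)                                # self-attention over time steps
        return self.head(x.mean(dim=1))                          # (B, n_emotions) logits

def emotional_entropy(logits):
    """Shannon entropy of the predicted emotion distribution, in bits."""
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log2(p.clamp_min(1e-12))).sum(dim=-1)

model = CNNBiGRUAttention()
logits = model(torch.randint(0, 10000, (2, 50)))  # two dummy sequences of 50 tokens
print(emotional_entropy(logits))
```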
{"title":"Emotional Acceptance Measure (EAM): An Objective Evaluation Method Towards Information Communication Effect","authors":"Xiao Yang, Haonan Cheng, Hanyang Song, Li Yang, Long Ye","doi":"10.1109/ICMEW56448.2022.9859363","DOIUrl":"https://doi.org/10.1109/ICMEW56448.2022.9859363","url":null,"abstract":"In this paper, we propose Emotional Acceptance Measure (EAM), an objective information communication effect evaluation (OICEE) method based on the theory of information entropy. Existing evaluation methods of information communication mostly utilize questionnaires and expert scoring, which often consume a lot of human resources. Aiming at this issue, we address the unexplored task - OICEE, and take the first step toward objective evaluation for assessing the effective results produced by the communication behavior in emotional dimension. Specifically, we construct a dataset for evaluating the information communication effect, design a CNN-BiGRU model based on the self-attention mechanism to calculate emotional information entropy, and propose a formula for calculating EAM score. For the first time, we introduce a novel way for objective evaluation of the information communication effect from the emotional dimension. The comparison experiment with manually annotated real evaluations shows that the EAM score achieves 94.41% correlation with subjective user evaluations, which proves the reasonableness and validity of our proposed objective evaluation method.","PeriodicalId":106759,"journal":{"name":"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127209280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Point Cloud Normal Estimation via Triplet Learning (Demonstration)
Pub Date: 2022-07-18 | DOI: 10.1109/ICMEW56448.2022.9859360
Weijia Wang, Xuequan Lu, Dasith de Silva Edirimuni, Xiao Liu, A. Robles-Kelly
In this demonstration paper, we present the technical details of our proposed triplet learning-based point cloud normal estimation method. Our network architecture consists of two phases: (a) feature encoding, which learns representations of local patches, and (b) normal estimation, which takes the learned representations as input to regress normals. We are motivated by the observation that local patches on isotropic and anisotropic surfaces have similar and distinct normals, respectively, and that these separable representations can be learned to facilitate normal estimation. Experiments show that our method preserves sharp features and achieves good normal estimation results, especially on computer-aided design (CAD) shapes.
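A minimal sketch of triplet learning over patch embeddings, using a toy PointNet-like encoder and PyTorch's built-in triplet margin loss (the encoder and the anchor/positive/negative sampling are assumptions, not the authors' network):

```python
# Minimal sketch of triplet learning on point-cloud patch embeddings (assumed setup).
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Toy PointNet-like encoder: per-point MLP followed by max pooling."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))

    def forward(self, patch):                      # patch: (B, N, 3) local neighborhoods
        return self.mlp(patch).max(dim=1).values   # (B, feat_dim) patch representation

encoder = PatchEncoder()
triplet = nn.TripletMarginLoss(margin=0.2)

# Anchor/positive would come from patches with similar normals, the negative from a
# patch across a sharp feature; how the paper actually samples them is not shown here.
anchor, positive, negative = (torch.randn(8, 64, 3) for _ in range(3))
loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
```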
{"title":"Deep Point Cloud Normal Estimation via Triplet Learning (Demonstration)","authors":"Weijia Wang, Xuequan Lu, Dasith de Silva Edirimuni, Xiao Liu, A. Robles-Kelly","doi":"10.1109/ICMEW56448.2022.9859360","DOIUrl":"https://doi.org/10.1109/ICMEW56448.2022.9859360","url":null,"abstract":"In this demonstration paper, we show the technical details of our proposed triplet learning-based point cloud normal estimation method. Our network architecture consists of two phases: (a) feature encoding to learn representations of local patches, and (b) normal estimation that takes the learned representations as input to regress normals. We are motivated that local patches on isotropic and anisotropic surfaces respectively have similar and distinct normals, and these separable representations can be learned to facilitate normal estimation. Experiments show that our method preserves sharp features and achieves good normal estimation results especially on computer-aided design (CAD) shapes.","PeriodicalId":106759,"journal":{"name":"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128139786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-Power Semantic Segmentation on Embedded Systems for Traffic in Asian Countries
Pub Date: 2022-07-18 | DOI: 10.1109/ICMEW56448.2022.9859408
Jun-Long Wang
Real-world road scenes require a fast and robust system that can detect any traffic situation and compute predictions within a limited hardware environment. In this paper, we present a semantic segmentation model that performs well in complex and dynamic Asian road scenes, and we evaluate its accuracy, power consumption, and speed on the MediaTek Dimensity 9000 platform. We use a Deep Dual-resolution Network (DDRNet) implemented in PyTorch and deploy it in TensorFlow Lite format on the MediaTek chip to assess our model. We adopt a two-stage training strategy and a decreasing training-resolution technique to further improve results. Our team won first place in the Low-power Deep Learning Semantic Segmentation Model Compression Competition for Traffic Scene in Asian Countries at the IEEE International Conference on Multimedia & Expo (ICME) 2022.
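The decreasing training-resolution technique can be illustrated with a short schedule sketch; the resolutions, epoch boundaries, and resizing code below are assumptions written in generic PyTorch, not the authors' training script:

```python
# Sketch of a decreasing-training-resolution schedule (assumed values, not the paper's).
import torch
import torch.nn.functional as F

def resolution_for_epoch(epoch):
    """Start at full resolution, then shrink the training crops as training progresses."""
    schedule = [(0, (1024, 2048)), (60, (768, 1536)), (90, (512, 1024))]
    res = schedule[0][1]
    for start, size in schedule:
        if epoch >= start:
            res = size
    return res

def resize_batch(images, labels, size):
    images = F.interpolate(images, size=size, mode="bilinear", align_corners=False)
    labels = F.interpolate(labels.unsqueeze(1).float(), size=size, mode="nearest")
    return images, labels.squeeze(1).long()

imgs = torch.randn(2, 3, 1024, 2048)
lbls = torch.randint(0, 19, (2, 1024, 2048))
for epoch in (0, 60, 95):
    x, y = resize_batch(imgs, lbls, resolution_for_epoch(epoch))
    print(epoch, x.shape, y.shape)
```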
{"title":"Low-Power Semantic Segmentation on Embedded Systems for Traffic in Asian Countries","authors":"Jun-Long Wang","doi":"10.1109/ICMEW56448.2022.9859408","DOIUrl":"https://doi.org/10.1109/ICMEW56448.2022.9859408","url":null,"abstract":"In the real-world road scene, it needs a quick and powerful system to detect any traffic situation and compute prediction results in a limited hardware environment. In this paper, we present a semantic segmentation model that can perform well in the complex and dynamic Asian road scenes and evaluate the accuracy, power, and speed of MediaTek Dimensity 9000 platform. We use a model called Deep Dual-resolution Networks (DDRNets) in PyTorch, and deploy TensorFlow Lite format in MediaTek chip to assess our model. We choose the two-stage training strategy and utilize the decreasing training resolution technique to further improve results. Our team is the first-place winner in Low-power Deep Learning Semantic Segmentation Model Compression Competition for Traffic Scene in Asian Countries at IEEE International Conference on Multimedia & Expo (ICME) 2022.","PeriodicalId":106759,"journal":{"name":"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121313606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PressyCube: An Embeddable Pressure Sensor with Softy Prop for Limb Rehabilitation in Immersive Virtual Reality
Pub Date: 2022-07-18 | DOI: 10.1109/ICMEW56448.2022.9859433
Chain Yi Chu, Ho Yin Ng, Chia-Hui Lin, Ping-Hsuan Han
In this demo, we present PressyCube, an embeddable pressure sensor that allows collecting physical force data from different types of exercises and body parts via Bluetooth for home-based rehabilitation. Currently, most home-based rehabilitation devices are focused on one specific body part only. Thus, we designed the pressure sensor that can be embedded with other props to extend its applications. We also demonstrate our work for fulfilling varying rehabilitation needs in the virtual environment with a head-mounted display by a swimming simulation exergame for players to work out using different exercises.
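PressyCube's Bluetooth protocol is not published in the abstract; purely as an illustration of subscribing to force readings over Bluetooth Low Energy, the sketch below uses the third-party bleak library with a hypothetical device address, characteristic UUID, and payload format:

```python
# Illustrative only: streaming pressure samples over BLE with the bleak library.
# The device address, characteristic UUID, and payload format are hypothetical,
# not PressyCube's actual protocol.
import asyncio
import struct
from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"                          # hypothetical
PRESSURE_CHAR_UUID = "00002a6d-0000-1000-8000-00805f9b34fb"   # hypothetical choice

def on_pressure(_, data: bytearray):
    (value,) = struct.unpack("<f", data[:4])                  # assume little-endian float32
    print(f"pressure: {value:.2f}")

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(PRESSURE_CHAR_UUID, on_pressure)
        await asyncio.sleep(30)                               # collect for 30 seconds
        await client.stop_notify(PRESSURE_CHAR_UUID)

asyncio.run(main())
```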
{"title":"PressyCube: An Embeddable Pressure Sensor with Softy Prop for Limb Rehabilitation in Immersive Virtual Reality","authors":"Chain Yi Chu, Ho Yin Ng, Chia-Hui Lin, Ping-Hsuan Han","doi":"10.1109/ICMEW56448.2022.9859433","DOIUrl":"https://doi.org/10.1109/ICMEW56448.2022.9859433","url":null,"abstract":"In this demo, we present PressyCube, an embeddable pressure sensor that allows collecting physical force data from different types of exercises and body parts via Bluetooth for home-based rehabilitation. Currently, most home-based rehabilitation devices are focused on one specific body part only. Thus, we designed the pressure sensor that can be embedded with other props to extend its applications. We also demonstrate our work for fulfilling varying rehabilitation needs in the virtual environment with a head-mounted display by a swimming simulation exergame for players to work out using different exercises.","PeriodicalId":106759,"journal":{"name":"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122333461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Transformer-Based Approach for Metal 3d Printing Quality Recognition
Pub Date: 2022-07-18 | DOI: 10.1109/ICMEW56448.2022.9859324
Weihao Zhang, Jiapeng Wang, Honglin Ma, Qi Zhang, Shuqian Fan
The mass of unlabeled production data hinders the large-scale application of advanced supervised learning techniques in modern industry. Metal 3D printing generates huge amounts of in-situ data that are closely related to the forming quality of parts. To avoid the labor cost of re-labeling the dataset whenever printing materials or process parameters change, we design a forming quality recognition model based on deep clustering, which makes the forming quality recognition task of metal 3D printing more flexible. Inspired by the success of the Vision Transformer, we introduce convolutional neural networks into the Vision Transformer structure to model the inductive bias of images while learning global representations. Our approach achieves state-of-the-art accuracy compared with other Vision Transformer-based models. In addition, our proposed framework is a good candidate for specific industrial vision tasks where annotations are scarce.
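A minimal sketch of the general idea of placing a convolutional stem in front of a Vision Transformer-style encoder (widths, depths, and input size are assumptions, not the paper's configuration):

```python
# Sketch of a CNN-stem + Transformer-encoder hybrid for image classification.
# Depths, widths, and image size are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class ConvViT(nn.Module):
    def __init__(self, num_classes=4, dim=256, depth=4, heads=8):
        super().__init__()
        # The convolutional stem supplies a locality/translation bias and
        # turns a 224x224 input into a 14x14 grid of tokens.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, dim, 3, stride=4, padding=1),
        )
        self.pos = nn.Parameter(torch.zeros(1, 14 * 14, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                                    # x: (B, 3, 224, 224)
        tokens = self.stem(x).flatten(2).transpose(1, 2)     # (B, 196, dim)
        tokens = self.encoder(tokens + self.pos)
        return self.head(tokens.mean(dim=1))                 # average pooling over tokens

model = ConvViT()
print(model(torch.randn(1, 3, 224, 224)).shape)              # torch.Size([1, 4])
```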
{"title":"A Transformer-Based Approach for Metal 3d Printing Quality Recognition","authors":"Weihao Zhang, Jiapeng Wang, Honglin Ma, Qi Zhang, Shuqian Fan","doi":"10.1109/ICMEW56448.2022.9859324","DOIUrl":"https://doi.org/10.1109/ICMEW56448.2022.9859324","url":null,"abstract":"The mass unlabeled production data hinders the large-scale application of advanced supervised learning techniques in the modern industry. Metal 3D printing generates huge amounts of in-situ data that are closely related to the forming quality of parts. In order to solve the problem of labor cost caused by re-labeling dataset when changing printing materials and process parameters, a forming quality recognition model based on deep clustering is designed, which makes the forming quality recognition task of metal 3D printing more flexible. Inspired by the success of Vision Transformer, we introduce convolutional neural networks into the Vision Transformer structure to model the inductive bias of images while learning the global representations. Our approach achieves state-of-the-art accuracy over the other Vision Transformer-based models. In addition, our proposed framework is a good candidate for specific industrial vision tasks where annotations are scarce.","PeriodicalId":106759,"journal":{"name":"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126834210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Melodic Skeleton: A Musical Feature for Automatic Melody Harmonization
Pub Date: 2022-07-18 | DOI: 10.1109/ICMEW56448.2022.9859421
Weiyue Sun, Jianguo Wu, Shengcheng Yuan
Recently, deep learning models have achieved good performance on automatic melody harmonization. However, these models often take the melody note sequence as input directly, without any feature extraction or analysis, and therefore require a large dataset to generalize. Inspired by the music theory of counterpoint writing, we introduce a novel musical feature called the melodic skeleton, which summarizes the melody movement and carries strong harmony-related information. Based on this feature, we propose a pipeline involving a skeleton analysis model for the melody harmonization task. We collected a dataset by inviting musicians to annotate the skeleton tones of melodies and trained the skeleton analysis model on it. Experiments show a great improvement on six metrics commonly used to evaluate melody harmonization, proving the effectiveness of the feature.
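A hedged sketch of the skeleton-analysis step as a per-note sequence tagger (the architecture below is an assumption, not the authors' model); its output would then condition the downstream harmonization model:

```python
# Illustrative skeleton-tone tagger (assumed architecture, not the paper's model).
import torch
import torch.nn as nn

class SkeletonTagger(nn.Module):
    """Tags each melody note as skeleton (1) or non-skeleton (0)."""
    def __init__(self, n_pitches=128, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_pitches, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)

    def forward(self, pitches):            # pitches: (B, T) MIDI note numbers
        h, _ = self.rnn(self.emb(pitches))
        return self.out(h)                 # (B, T, 2) per-note logits

tagger = SkeletonTagger()
melody = torch.randint(48, 84, (1, 16))          # a short run of melody pitches
skeleton_mask = tagger(melody).argmax(dim=-1)    # which notes form the skeleton
print(skeleton_mask)
# The predicted skeleton tones, alongside the melody, would then be fed to the
# harmonization model in place of the raw note sequence alone.
```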
{"title":"Melodic Skeleton: A Musical Feature for Automatic Melody Harmonization","authors":"Weiyue Sun, Jianguo Wu, Shengcheng Yuan","doi":"10.1109/ICMEW56448.2022.9859421","DOIUrl":"https://doi.org/10.1109/ICMEW56448.2022.9859421","url":null,"abstract":"Recently, deep learning models have achieved a good performance on automatic melody harmonization. However, these models often took melody note sequence as input directly without any feature extraction and analysis, causing the requirement of a large dataset to keep generalization. Inspired from the music theory of counterpoint writing, we introduce a novel musical feature called melodic skeleton, which summarizes the melody movement with strong harmony-related information. Based on the feature, a pipeline involving a skeleton analysis model is proposed for melody harmonization task. We collected a dataset by inviting musicians to annotate the skeleton tones from melodies and trained the skeleton analysis model. Experiments show a great improvement on six metrics which are commonly used in evaluating melody harmonization task, proving the effectiveness of the feature.","PeriodicalId":106759,"journal":{"name":"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126432574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Foldingnet-Based Geometry Compression of Point Cloud with Multi Descriptions
Pub Date: 2022-07-18 | DOI: 10.1109/ICMEW56448.2022.9859339
Xiaoqi Ma, Qian Yin, Xinfeng Zhang, Lv Tang
Traditional point cloud compression (PCC) methods are not effective in extremely low bit-rate scenarios because of uniform quantization. Although learning-based PCC approaches can achieve superior compression performance, they need to train multiple models for different bit rates, which greatly increases training complexity and memory storage. To tackle these challenges, we propose a novel FoldingNet-based Point Cloud Geometry Compression (FN-PCGC) framework. First, the point cloud is divided into several descriptions by a Multiple-Description Generation (MDG) module. Then a point-based auto-encoder with Multi-scale Feature Extraction (MFE) is introduced to compress all the descriptions. Experimental results show that the proposed method outperforms MPEG G-PCC and Draco with a gain of about 30%–80% on average.
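A minimal sketch of the multiple-description idea: randomly partition the cloud into K descriptions and run each through a shared point-based autoencoder (the split rule and the tiny network below are assumptions, not FN-PCGC itself):

```python
# Hedged sketch: multiple-description generation plus a tiny point autoencoder.
# The split rule, network sizes, and decoder are assumptions, not FN-PCGC itself.
import torch
import torch.nn as nn

def generate_descriptions(points, k=2):
    """Randomly partition an (N, 3) cloud into k roughly equal descriptions."""
    perm = torch.randperm(points.shape[0])
    return [points[perm[i::k]] for i in range(k)]

class PointAutoEncoder(nn.Module):
    def __init__(self, latent=256, n_out=1024):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(), nn.Linear(512, n_out * 3))
        self.n_out = n_out

    def forward(self, pts):                      # pts: (N, 3), one description
        code = self.enc(pts).max(dim=0).values   # global latent; this is what would be coded
        return self.dec(code).view(self.n_out, 3)

cloud = torch.randn(2048, 3)
ae = PointAutoEncoder()
recons = [ae(d) for d in generate_descriptions(cloud, k=2)]
merged = torch.cat(recons, dim=0)                # decode each description and merge
print(merged.shape)
```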
{"title":"Foldingnet-Based Geometry Compression of Point Cloud with Multi Descriptions","authors":"Xiaoqi Ma, Qian Yin, Xinfeng Zhang, Lv Tang","doi":"10.1109/ICMEW56448.2022.9859339","DOIUrl":"https://doi.org/10.1109/ICMEW56448.2022.9859339","url":null,"abstract":"Traditional point cloud compression (PCC) methods are not effective at extremely low bit rate scenarios because of the uniform quantization. Although learning-based PCC approaches can achieve superior compression performance, they need to train multiple models for different bit rate, which greatly increases the training complexity and memory storage. To tackle these challenges, a novel FoldingNet-based Point Cloud Geometry Compression (FN-PCGC) framework is proposed in this paper. Firstly, the point cloud is divided into several descriptions by a Multiple-Description Generation (MDG) module. Then a point-based Auto-Encoder with the Multi-scale Feature Extraction (MFE) is introduced to compress all the descriptions. Experimental results show that the proposed method outperforms the MPEG G-PCC and Draco with about 30% ~ 80% gain on average.","PeriodicalId":106759,"journal":{"name":"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129072510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerating Brain Research using Explainable Artificial Intelligence
Pub Date: 2022-07-18 | DOI: 10.1109/ICMEW56448.2022.9859322
Jing-Lun Chou, Ya-Lin Huang, Chia-Ying Hsieh, Jian-Xue Huang, Chunshan Wei
In this demo, we present ExBrainable, an open-source application dedicated to modeling, evaluating, and visualizing explainable CNN-based models on EEG data for brain and neuroscience research. We have implemented functions including EEG data loading, model training, evaluation, and parameter visualization. The application also ships with a model base of representative convolutional neural network architectures that users can apply without any programming. With its easy-to-use graphical user interface (GUI), it is fully accessible to investigators from different disciplines with limited resources and limited programming skills. Starting from preprocessed EEG data, users can quickly train the desired model, evaluate its performance, and finally visualize the features learned by the model with ease.
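ExBrainable itself is operated through its GUI; the generic PyTorch sketch below only illustrates the kind of load-train-evaluate workflow such a tool wraps, with made-up EEG dimensions and a toy CNN; it is not ExBrainable's API:

```python
# Generic illustration of an EEG-classification workflow such a GUI wraps.
# This is NOT ExBrainable's API; shapes and the tiny CNN are assumptions.
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    """Temporal conv -> spatial conv over channels -> classifier."""
    def __init__(self, n_channels=64, n_samples=500, n_classes=2):
        super().__init__()
        self.temporal = nn.Conv2d(1, 8, (1, 25), padding=(0, 12))
        self.spatial = nn.Conv2d(8, 16, (n_channels, 1))
        self.pool = nn.AvgPool2d((1, 10))
        self.fc = nn.Linear(16 * (n_samples // 10), n_classes)

    def forward(self, x):                 # x: (B, 1, channels, samples)
        x = torch.relu(self.spatial(torch.relu(self.temporal(x))))
        x = self.pool(x).flatten(1)
        return self.fc(x)

# Preprocessed epochs: (trials, 1, channels, samples) with integer class labels.
X, y = torch.randn(32, 1, 64, 500), torch.randint(0, 2, (32,))
model = TinyEEGNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(model(X), y)
loss.backward()
opt.step()
print("accuracy:", (model(X).argmax(1) == y).float().mean().item())
```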
{"title":"Accelerating Brain Research using Explainable Artificial Intelligence","authors":"Jing-Lun Chou, Ya-Lin Huang, Chia-Ying Hsieh, Jian-Xue Huang, Chunshan Wei","doi":"10.1109/ICMEW56448.2022.9859322","DOIUrl":"https://doi.org/10.1109/ICMEW56448.2022.9859322","url":null,"abstract":"In this demo, we present ExBrainable, an open-source application dedicated to modeling, evaluating and visualizing explainable CNN-based models on EEG data for brain/neuroscience research. We have implemented the functions including EEG data loading, model training, evaluation and parameter visualization. The application is also built with a model base including representative convolutional neural network architectures for users to implement without any programming. With its easy-to-use graphic user interface (GUI), it is completely available for investigators of different disciplines with limited resource and limited programming skill. Starting with preprocessed EEG data, users can quickly train the desired model, evaluate the performance, and finally visualize features learned by the model with no pain.","PeriodicalId":106759,"journal":{"name":"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133658190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MICW: A Multi-Instrument Music Generation Model Based on the Improved Compound Word
Pub Date: 2022-07-18 | DOI: 10.1109/ICMEW56448.2022.9859531
Yi-Jr Liao, Wang Yue, Yuqing Jian, Zijun Wang, Yuchong Gao, Chenhao Lu
In this work, we address the task of multi-instrument music generation. Along with the development of artificial neural networks, deep learning has become a leading technique for automatic music generation and features in many previous works such as MuseGan [1], MusicBert [2], and PopMAG [3]. However, few of them implement a well-designed representation of multi-instrumental music, and no model fully incorporates prior knowledge of music theory. In this paper, we leverage the Compound Word [4] and R-drop [5] methods for the multi-instrument music generation task. Objective and subjective evaluations show that our model requires less training time and generates music of prominent quality.
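R-drop [5] is a model-agnostic regularizer: each batch is passed through the network twice with different dropout masks, and a symmetric KL term between the two output distributions is added to the usual cross-entropy loss. A hedged sketch (the toy token model is a stand-in, not the MICW architecture):

```python
# Model-agnostic sketch of the R-drop objective used during training.
# The toy token model is a placeholder, not the MICW architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

def r_drop_loss(model, tokens, targets, alpha=1.0):
    logits1 = model(tokens)                     # two forward passes share weights
    logits2 = model(tokens)                     # but see different dropout masks
    ce = 0.5 * (F.cross_entropy(logits1, targets) + F.cross_entropy(logits2, targets))
    p, q = F.log_softmax(logits1, dim=-1), F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                + F.kl_div(q, p, log_target=True, reduction="batchmean"))
    return ce + alpha * kl                      # consistency term regularizes dropout

toy = nn.Sequential(nn.Embedding(512, 64), nn.Flatten(1), nn.Dropout(0.1), nn.Linear(64 * 8, 512))
tokens = torch.randint(0, 512, (4, 8))          # e.g. compound-word token indices
targets = torch.randint(0, 512, (4,))
print(r_drop_loss(toy, tokens, targets))
```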
{"title":"MICW: A Multi-Instrument Music Generation Model Based on the Improved Compound Word","authors":"Yi-Jr Liao, Wang Yue, Yuqing Jian, Zijun Wang, Yuchong Gao, Chenhao Lu","doi":"10.1109/ICMEW56448.2022.9859531","DOIUrl":"https://doi.org/10.1109/ICMEW56448.2022.9859531","url":null,"abstract":"In this work, we address the task of multi-instrument music generation. Notably, along with the development of artificial neural networks, deep learning has become a leading technique to accelerate the automatic music generation and is featured in many previous papers like MuseGan[1], MusicBert[2], and PopMAG[3]. However, seldom of them implement a well-designed representation of multi-instrumental music, and no model perfectly introduces a prior knowledge of music theory. In this paper, we leverage the Compound Word[4] and R-drop[5] method to work on multi-instrument music generation tasks. Objective and subjective evaluations show that the generated music has cost less training time, and achieved prominent music quality.","PeriodicalId":106759,"journal":{"name":"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132587647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Low-Cost Virtual 2D Spokes-Character Advertising Framework
Pub Date: 2022-07-18 | DOI: 10.1109/ICMEW56448.2022.9859278
Jiarun Zhang, Zhao Li, Jialun Zhang, Zhiqiang Zhang
Live-streaming advertising has achieved huge success on modern retail platforms. However, small-scale merchants are neither economically nor technically capable of having their own spokesperson. To address the need for massive online interactive advertising, this paper proposes an economical approach, Virtual spokes-Character Advertising (VSCA). VSCA generates 2D virtual spokes-character advertising videos and provides them to merchants as a supplementary marketing method. VSCA first generates a simplified natural-language description of the merchandise from its original long title using text generation methods and then passes it to a text-to-speech model to produce an audio description. Second, VSCA feeds the audio to our remodeled two-phase lip-syncing network to generate virtual advertising videos about the given merchandise. With our newly designed two-phase lip-syncing network, VSCA is the first system in the industry able to generate a lip-synced video for given audio from a human face image rather than a video input. As the industry's first application of 2D spokes-character advertising, VSCA has large potential in real-world applications.
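Purely as a schematic of the three pipeline stages (title simplification, text-to-speech, lip-syncing), with every component a named placeholder rather than VSCA's actual models:

```python
# Schematic of a VSCA-style pipeline; all three components are placeholders
# with hypothetical interfaces, not the paper's actual models.
from dataclasses import dataclass

@dataclass
class AdRequest:
    long_title: str       # original merchandise title
    face_image: bytes     # single image of the virtual spokes-character

def simplify_title(long_title: str) -> str:
    """Stand-in for the text-generation step that shortens the product title."""
    return " ".join(long_title.split()[:8])

def synthesize_speech(text: str) -> bytes:
    """Stand-in for the text-to-speech model; returns audio bytes."""
    return text.encode("utf-8")          # placeholder payload

def lip_sync(face_image: bytes, audio: bytes) -> bytes:
    """Stand-in for the two-phase lip-syncing network; returns video bytes."""
    return face_image + audio            # placeholder payload

def make_ad(req: AdRequest) -> bytes:
    script = simplify_title(req.long_title)
    audio = synthesize_speech(script)
    return lip_sync(req.face_image, audio)

video = make_ad(AdRequest("Ultra comfy breathable running shoes for men and women", b"face"))
print(len(video))
```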
{"title":"A Low-Cost Virtual 2D Spokes-Character Advertising Framework","authors":"Jiarun Zhang, Zhao Li, Jialun Zhang, Zhiqiang Zhang","doi":"10.1109/ICMEW56448.2022.9859278","DOIUrl":"https://doi.org/10.1109/ICMEW56448.2022.9859278","url":null,"abstract":"Live-streaming advertising has achieved huge success in modern retail platforms. However, small-scaled merchants are neither economically nor technically capable of having their own spokes-person. Addressing the need for the massive online interactive advertising, this paper proposes an economic-efficient approach, Virtual spokes-Character Advertising (VSCA). VSCA generates 2-D Virtual spokes-Character advertising video and provides it to the merchants as a supplementary marketing method. VSCA first generates the simplified natural language description of the merchandise from its original long title using text generation methods and then passes it to the Text-to-Speech model for the audio description. Secondly, VSCA remits the audio to our remodeled two-phases lip-syncing network to generate virtual advertising videos about the given merchandise. With our novelly designed two-phases lip-syncing network, it is the first in the industry able to generate lip-syncing video of given audio with human face image input instead of video input. As the industry’s first application on 2D spokes-character advertising, VSCA has its large potential in real world applications.","PeriodicalId":106759,"journal":{"name":"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128173414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}