Pub Date: 2021-12-01 | DOI: 10.1109/CSCI54926.2021.00040
Won-Chul Shin, Hyun-Kyu Park, Youngtack Park
In knowledge graph completion, symbolic reasoning methods derive human-readable rules by analyzing an incomplete knowledge graph and use an inference engine to infer the missing knowledge. However, it is impractical to define the full set of rules by hand for a large-scale knowledge graph. This study proposes a method that enables end-to-end learning over a knowledge graph and induces rules without the intermediate processing steps that require direct human involvement. The proposed method combines the concept of unification from symbolic reasoning with deep learning over vectors representing symbols: it trains the vectors representing the relations of rule schemas that are defined to induce rules from a given knowledge graph. The method is evaluated against the neural theorem prover and the greedy neural theorem prover, two recently developed neuro-symbolic models, on four benchmark datasets. The experimental results show that the proposed method induces more meaningful rules in less training time. In addition, a knowledge graph completion experiment using an inference engine confirms that the rules induced by the proposed model effectively complete missing knowledge.
{"title":"Knowledge Inference and Knowledge Completion Methods using Neuro-Symbolic Inductive Rules","authors":"Won-Chul Shin, Hyun-Kyu Park, Youngtack Park","doi":"10.1109/CSCI54926.2021.00040","DOIUrl":"https://doi.org/10.1109/CSCI54926.2021.00040","url":null,"abstract":"In knowledge graph completion, a symbolic reasoning method establishes a human readable rule by analyzing an imperfect knowledge graph and infers knowledge omitted by an inference engine. However, the entire rules cannot be defined based on a large-scale knowledge graph. This study proposes a method, based on a knowledge graph, that can facilitate end-to-end learning and induce rules without several processing steps that require direct human involvement. The proposed method combines the concept of unification used in symbolic reasoning and deep learning for training vectors expressing symbols. It trains the vectors expressing relations of rule schemas defined to induce rules based on a given knowledge graph. Furthermore, the performance of the proposed method is evaluated against neural theorem prover and the greedy neural theorem prover, which are recently developed neuro-symbolic models, based on four benchmark datasets. The experimental results verify that the proposed method induces more significant rules in less training time. Furthermore, this study conducted an experiment on knowledge graph completion, implemented by an inference engine. Based on the experiment results, it was confirmed that the rules induced by the proposed model can indeed effectively complete missing knowledge.","PeriodicalId":206881,"journal":{"name":"2021 International Conference on Computational Science and Computational Intelligence (CSCI)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114965216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01 | DOI: 10.1109/csci54926.2021.00057
Tyson Howard, Z. Wang
The goal of this project is to create a simple and efficient online platform for beginner basketball players who want to learn the fundamentals of the game. The website lets users learn basic skills such as dribbling, shooting, passing, and defense. An admin page allows the administrator to check on players' progress and answer any questions they may have. All of these functions were built with HTML, PHP, CSS, Bootstrap, and MySQL.
{"title":"Virtual Basketball Training Platform","authors":"Tyson Howard, Z. Wang","doi":"10.1109/csci54926.2021.00057","DOIUrl":"https://doi.org/10.1109/csci54926.2021.00057","url":null,"abstract":"The goal for this project is to create a simple and efficient online platform for beginner basketball players who have a passion for learning the fundamentals of basketball. The system website gives users the ability to learn basic skills like dribbling, shooting, passing, and defense. An admins page is created so the administrator can check on the players’ progress and answer any questions they may have. All of these functions were created by using HTML, PHP, CSS, BOOTSTRAP, and MYSQL.","PeriodicalId":206881,"journal":{"name":"2021 International Conference on Computational Science and Computational Intelligence (CSCI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115720531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01 | DOI: 10.1109/CSCI54926.2021.00318
Luis M. Pereira, A. Salazar, L. Vergara
This paper proposes new analyses for simultaneously acquired functional magnetic resonance imaging (fMRI) and electroencephalographic (EEG) data. Considering the current state of the art in this field, the methodology is framed as multimodal fusion that can be applied at early and/or late stages of processing. Problems such as spatial and temporal synchronization of the data, and possible solutions based on over-sampling or under-sampling, are discussed. The principal objective of this ongoing research is to increase temporal and spatial resolution for recognizing activation zones of the brain (zones of interest) during cognitive tasks. Preliminary results of 3D reconstruction of the brain volume from EEG signals of a subject performing an oddball task are included.
{"title":"Simultaneous analysis of fMRI and EEG biosignals: a multimodal fusion approach","authors":"Luis M. Pereira, A. Salazar, L. Vergara","doi":"10.1109/CSCI54926.2021.00318","DOIUrl":"https://doi.org/10.1109/CSCI54926.2021.00318","url":null,"abstract":"This paper presents a proposal of new analyses for data from functional magnetic resonance images and electroencephalographic signals acquired simultaneously. Considering the current state of the art in this field, the methodology is proposed in the context of multimodal fusion that can be applied in early and/or late stages of the processing. Several problems such as spatial and time synchronization of the data and possible solutions to deal with them based on over-sampling or under-sampling are discussed. The principal objective of this ongoing research consists of increasing temporal and spatial resolution for recognition of activation zones of the brain (zones of interest) during cognitive tasks. Some preliminary results of 3D reconstruction of the brain volume from electroencephalographic signals of a subject carrying out an oddball task are included.","PeriodicalId":206881,"journal":{"name":"2021 International Conference on Computational Science and Computational Intelligence (CSCI)","volume":"249 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120881930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01 | DOI: 10.1109/CSCI54926.2021.00089
Dena F. Mujtaba, N. Mahapatra
Artificial intelligence (AI) is being increasingly integrated into the hiring process. A prominent example is video interviews used by large organizations to quickly screen job candidates. The personality traits of job candidates, such as the Big Five characteristics, are predicted using computer vision and affective computing approaches. Past methods have used feature extraction, text analysis, and other multimodal methods to achieve a high prediction accuracy. We build upon past approaches by using a multi-task deep neural network (MTDNN) to predict personality traits and job interview scores of individuals. An MTDNN shares lower layers to learn features which apply across outputs, and contains task-specific layers to predict each individual trait, thereby providing an advantage over single-task approaches since personality traits are determined by features (e.g., emotion, gestures, and speech) shared across traits. Our model is trained using the CVPR 2017 First Impressions V2 competition dataset, containing 10,000 videos of individuals and their Big Five personality and interview scores. We also use scene, audio, and facial features from the state-of-the-art model from the competition. A 5-fold cross-validation approach is used to evaluate our results. We achieve a prediction accuracy for all traits on par with state-of-the-art models, while reducing training time and parameter tuning to a single network.
{"title":"Multi-Task Deep Neural Networks for Multimodal Personality Trait Prediction","authors":"Dena F. Mujtaba, N. Mahapatra","doi":"10.1109/CSCI54926.2021.00089","DOIUrl":"https://doi.org/10.1109/CSCI54926.2021.00089","url":null,"abstract":"Artificial intelligence (AI) is being increasingly integrated into the hiring process. A prominent example is video interviews used by large organizations to quickly screen job candidates. The personality traits of job candidates, such as the Big Five characteristics, are predicted using computer vision and affective computing approaches. Past methods have used feature extraction, text analysis, and other multimodal methods to achieve a high prediction accuracy. We build upon past approaches by using a multi-task deep neural network (MTDNN) to predict personality traits and job interview scores of individuals. An MTDNN shares lower layers to learn features which apply across outputs, and contains task-specific layers to predict each individual trait, thereby providing an advantage over single-task approaches since personality traits are determined by features (e.g., emotion, gestures, and speech) shared across traits. Our model is trained using the CVPR 2017 First Impressions V2 competition dataset, containing 10,000 videos of individuals and their Big Five personality and interview scores. We also use scene, audio, and facial features from the state-of-the-art model from the competition. A 5-fold cross-validation approach is used to evaluate our results. We achieve a prediction accuracy for all traits on par with state-of-the-art models, while reducing training time and parameter tuning to a single network.","PeriodicalId":206881,"journal":{"name":"2021 International Conference on Computational Science and Computational Intelligence (CSCI)","volume":"10 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121002129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01 | DOI: 10.1109/CSCI54926.2021.00244
Patrick Philipp, L. Hempel, D. Hempel, Jürgen Beyerer
Clinical Practice Guidelines (CPGs) contain expert knowledge on the diagnosis and treatment of diseases. They can be regarded as state-of-the-art, standardized procedures established by consensus of the clinical expert community. In this work, we show how CPGs can be formalized as activities of the Unified Modeling Language (UML) and subsequently translated into PROforma models. UML activities allow a comprehensible representation of the underlying process, whereas PROforma models can be executed directly in a dialog system and support the practitioner during the diagnosis or treatment process. We expand our approach from [1] to include more complex diseases such as Primary Myelofibrosis (PMF) and Immune Thrombocytopenia (ITP) and show its applicability for exemplary patients.
{"title":"Modeling Clinical Practice Guidelines for Interactive Decision Support Exemplified by Primary Myelofibrosis and Immune Thrombocytopenia","authors":"Patrick Philipp, L. Hempel, D. Hempel, Jürgen Beyerer","doi":"10.1109/CSCI54926.2021.00244","DOIUrl":"https://doi.org/10.1109/CSCI54926.2021.00244","url":null,"abstract":"Clinical Practice Guidelines (CPGs) contain ex-pert knowledge on the diagnosis and treatment of diseases. They can be regarded as state of the art and standardized procedures that have been established by consensus of the clinical expert community. In this work, we show how CPGs can be formalized by activities of the Unified Modeling Language (UML), and can subsequently be translated into PROforma models. UML activities allow for a comprehensible representation of the underlying process, whereas PROforma models can be directly executed in a dialog system and support the practitioner during the diagnosis or treatment process. In this work, we expand our approach from [1] to include more complex diseases like Primary Myelofribrosis (PMF) and Immune Thrombocytopenia (ITP) and show the applicability for exemplary patients.","PeriodicalId":206881,"journal":{"name":"2021 International Conference on Computational Science and Computational Intelligence (CSCI)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127515093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01 | DOI: 10.1109/CSCI54926.2021.00331
I. Obagbuwa, Stefany Bam, Dineo Tiffany Moroka
This work sought to investigate how face recognition can be implemented in MATLAB to correctly detect and identify an individual from their face. Face recognition is a biometric technology used to recognize and authenticate a detected face in images or videos, and it can be applied in various industries for various purposes. The main goal of this work is to correctly authenticate an individual's face using a convolutional neural network (AlexNet) in MATLAB.
{"title":"Face Recognition Using MATLAB","authors":"I. Obagbuwa, Stefany Bam, Dineo Tiffany Moroka","doi":"10.1109/CSCI54926.2021.00331","DOIUrl":"https://doi.org/10.1109/CSCI54926.2021.00331","url":null,"abstract":"This work sought to investigate how face recognition can be implemented in MATLAB to correctly detect and identify an individual using their face. Face recognition is a biometric technology that is used to recognize and authenticate a detected face in images or videos. This technology can be used in various industries for various purposes. The main goal of this work is to correctly authenticate an individual face using Convolutional Neural Networks (called AlexNet) in MATLAB.","PeriodicalId":206881,"journal":{"name":"2021 International Conference on Computational Science and Computational Intelligence (CSCI)","volume":"504 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124849989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01 | DOI: 10.1109/CSCI54926.2021.00128
Payal Patel, K. Passi, Chakresh Kumar Jain
Cancer, one of the most critical diseases in medical history, demands adequate focus on its causes, symptoms, and detection. Various algorithms and software tools have been designed to predict cancer at the cellular level. The most crucial aspect of identifying cancerous tissues is classifying them based on gene expression data. Gene expression datasets contain a very large number of genes relative to the number of samples, so sample size and dimensionality are a major challenge for researchers. In this work, four cancer microarray datasets are analyzed: breast cancer, lung cancer, leukemia, and colon cancer. The datasets were analyzed using nature-inspired algorithms, namely Grasshopper Optimization (GOA), Particle Swarm Optimization (PSO), and Interval Value-based Particle Swarm Optimization (IVPSO). To study prediction accuracy, five classifiers were used: Random Forest, K-Nearest Neighbors (KNN), Neural Network, Naïve Bayes, and Support Vector Machine (SVM). GOA with the SVM classifier outperforms the other two optimization algorithms in accuracy on the leukemia, lung, and breast cancer datasets, selecting the best genes/attributes to correctly classify the data.
{"title":"Improving Gene Expression Prediction of Cancer Data Using Nature Inspired Optimization Algorithms","authors":"Payal Patel, K. Passi, Chakresh Kumar Jain","doi":"10.1109/CSCI54926.2021.00128","DOIUrl":"https://doi.org/10.1109/CSCI54926.2021.00128","url":null,"abstract":"Cancer being one of the most vital diseases in the medical history needs adequate focus on its causes, symptoms and detection. Various algorithms and software have been designed so far to predict the cancer at cellular level. The most crucial aspect for sorting the cancerous tissues is the classification of such tissues based on the gene expression data. Gene expression data consists of high amount of genetic data as compared to the number of data samples. Thus, sample size and dimensions are a major challenge for researchers. In this work, four different types of cancer microarray datasets are analyzed viz., breast cancer, lung cancer, leukemia and colon cancer. The analysis of the cancer microarray datasets was done using various nature-inspired algorithms like Grasshopper Optimization (GOA), Particle Swarm Optimization (PSO), and Interval Value-based Particle Swarm Optimization (IVPSO). To study the accuracy of the prediction, five different classifiers were used: Random Forest, K-Nearest Neighborhood (KNN), Neural Network, Naïve Bayes and Support Vector Machine (SVM). The Grasshopper Optimization (GOA) outperforms in accuracy compared to the other two optimization algorithms with SVM classifier on leukemia, lung and breast cancer datasets selecting the best genes/attributes to correctly classify the dataset.","PeriodicalId":206881,"journal":{"name":"2021 International Conference on Computational Science and Computational Intelligence (CSCI)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124909711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01 | DOI: 10.1109/CSCI54926.2021.00273
Glen W. Sagers
Wi-Fi Protected Access version 3 (WPA3) is the newest security standard for wireless networks. Ratified in 2018, and mandatory for devices bearing the Wi-Fi trademark since July of 2020, the protocol has many security improvements over previous versions, including better encryption and key sharing than the older WPA2 protocol. Unfortunately, adoption of WPA3 is likely to be very slow, just like its predecessors. These delays have nothing to do with the protocol and everything to do with human factors and legacy systems. Many users do not understand either why they need new security measures or how to implement them. Legacy systems, specifically Internet of Things (IoT) devices that can only connect to WPA2 networks, are widespread and probably will not be updated. This paper is a call for industry awareness and action.
{"title":"WPA3: The Greatest Security Protocol That May Never Be","authors":"Glen W. Sagers","doi":"10.1109/CSCI54926.2021.00273","DOIUrl":"https://doi.org/10.1109/CSCI54926.2021.00273","url":null,"abstract":"Wi-Fi Protected Access version 3 (WPA3) is the newest security standard for wireless networks. Ratified in 2018, and mandatory for devices bearing the Wi-Fi trademark since July of 2020, the protocol has many security improvements over previous versions. It has better encryption and key sharing than the older WPA2 protocol. Unfortunately, adoption of WPA3 is likely to be very slow, just like its predecessors. These delays have nothing to do with the protocol, and everything to do with human factors and legacy systems. Many users do not understand either why they need new security measures, or how to implement them. Legacy systems, specifically Internet of Things (IoT) devices which can only connect to WPA2 networks, are widespread, and probably will not be updated. This paper is a call for industry awareness and action.","PeriodicalId":206881,"journal":{"name":"2021 International Conference on Computational Science and Computational Intelligence (CSCI)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125142477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01 | DOI: 10.1109/CSCI54926.2021.00241
Achee Bonnie, Alkadi Ghassan, McNulty Matthew, Summers Sandy
The Industry Connect Initiative of the Computer Science Department at Southeastern is a four-pronged approach to connect students to real-world-ready skills and relevant topics. This approach includes an industry advisory board, the distinguished lecture series, an internship program, and curriculum opportunities [1]. These four pieces, along with a partnership with the university's Career Services Department and Workforce Talent Initiative, provide the resources necessary for graduates to be highly sought after for employment. This poster paper presents an overview of the Industry Connect Initiative.
Title: Industry Connect Initiative: Partnering for Student Success
Pub Date: 2021-12-01 | DOI: 10.1109/CSCI54926.2021.00173
Shinjin Kang, Jong-In Choi, Hyunjeong Tae, Sookyun Kim
This paper proposes a low-cost technique for producing and using labeled emotion data in game engines, which can support rapidly developing deep learning technologies. The proposed system extracts realistic images from game environments and automatically creates quantified two-dimensional (2D) emotion segmentation images linked to the extracted images. The segmentation data are learned through an image-to-image translation network. Because this 2D emotion segmentation mapping technique can be trained on a large amount of training data, learning is stable. Industries that require spatial emotion interpretation can utilize the results of this study.
{"title":"Game Engine Based 2D Emotion Segmentation Generation Method","authors":"Shinjin Kang, Jong-In Choi, Hyunjeong Tae, Sookyun Kim","doi":"10.1109/CSCI54926.2021.00173","DOIUrl":"https://doi.org/10.1109/CSCI54926.2021.00173","url":null,"abstract":"This paper proposes a low-cost production and utilization technique for labeling emotion data in game engines, which can be used to support rapidly developing deep learning technologies. The proposed system extracts realistic images from game environments and automatically creates quantified two-dimensional (2D) emotion segmentation images linked to the extracted images. The segmentation data are learned through an image-to-image translation network. This 2D emotion segmentation mapping technique is trained using many training data, which allows stable learning. Industries that require spatial emotion interpretation can utilize the results of this study.","PeriodicalId":206881,"journal":{"name":"2021 International Conference on Computational Science and Computational Intelligence (CSCI)","volume":"9 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125901486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}