Dialogue systems have made great progress recently, but they are still at the initial stage of passive reply; building a dialogue model with the ability to reply proactively remains a great challenge. This paper proposes an end-to-end dialogue model based on a memory network and a graph neural network: the memory network stores the conversation history and knowledge, while the graph neural network encodes the background knowledge. We propose a soft weighting mechanism that integrates the dialogue-goal information into the query pointer, enhancing the model's ability to shift topics dynamically during decoding. Experimental results indicate that our model outperforms various kinds of generation models under automatic evaluation and accomplishes the conversational target more actively.
{"title":"Multi-Hop Memory Network with Graph Neural Networks Encoding for Proactive Dialogue","authors":"Haonan Yuan, Jinqi An","doi":"10.1145/3404555.3404605","DOIUrl":"https://doi.org/10.1145/3404555.3404605","url":null,"abstract":"Dialogue system has made great progress recently, but it is still in the initial stage of passive reply. How to build a dialogue model with proactive reply ability is a great challenge. This paper proposes an End-to-End dialogue model based on Memory network and Graph Neural Network, which uses memory network to store conversation history and knowledge, and uses Graph Neural Network to encode background knowledge. We propose a soft weighting mechanism to integrate the dialogue goal information into the query pointer, so as to enhance the dynamic topic transfer ability during decoding. Experimental results indicate that our model outperforms various kinds of generation models under automatic evaluations and can accomplish the conversational target more actively","PeriodicalId":220526,"journal":{"name":"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132289819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sentiment analysis applies natural language processing in many domains to determine what kind of subjective information a text expresses. It has recently been used by online shops to characterize the types of buyers' reviews and comments and the impressions they give of services and products. In this research, we propose an adjustable sentiment analysis algorithm for real-time analysis of user-generated data about products in online shops, with a UI that runs in local shops as a friendly tool. The proposed model first builds a dynamic dictionary from buyers' comments and reviews gathered from online shops, using a selected set of admin-defined features extracted from a specific product (or the top products in a category), and then classifies the preprocessed data into predefined classes. To the best of the authors' knowledge, the proposed method introduces new feature vectors that strongly increase accuracy and trustworthiness in analyzing online shop reviews and comments, with a low time overhead. Our extensive simulation results, especially in combination with an online shop supplying real data, show improved accuracy and fine tuning of the polarity rank for online shop managers.
{"title":"A Novel Large-scale Model for Real Time Sentiment Analysis Using Online Shop Reviews and Comments","authors":"Fereshteh Ghorbanian, Mehrdad Jalali","doi":"10.1145/3404555.3404646","DOIUrl":"https://doi.org/10.1145/3404555.3404646","url":null,"abstract":"Sentiment analysis concept is referring to natural language processing in lots of domains in order to find which kind of subjective data or information it expresses. Sentiment analysis recently used as a method in online shops to identify about type of buyers review and comment and an impression about services and products. In this research, we propose an adjustable sentiment analysis algorithm for real time analysis on user generated data on products in online shops with a UI that run on local shops as a friendly tool. The proposed model builds a dynamic dictionary from buyers comment and reviews gathered from online shops firstly using selected set of admin-based features extracted from a specific product (or top of a products in a category), then classifying these preprocessed data under predefined classes. According to best knowledge of authors the proposed method introduces new features vectors that strongly increase accuracy and trustworthiness in analyzing online shop reviews and comments with a low time overhead. Our extensive simulation result especially combination with an online shop for real data shows the improved accuracy and fine tuning of the polarity rank for online shops manager.","PeriodicalId":220526,"journal":{"name":"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence","volume":"116 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120842841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic fake news detection is a challenging problem that requires the support of a number of verifiable facts. Wang et al. [16] introduced LIAR, a validated dataset, and presented a six-class classification task with several popular machine learning methods to detect fake news at the linguistic level. However, empirical results have shown that CNN- and RNN-based models do not perform well, especially when integrating all features with the claim. In this paper, we are the first to present a method for building a BERT-based [4] mental model that captures mental features for fake news detection. In detail, we present a method to construct a patterned text at the linguistic level that integrates the claim and features appropriately; we then fine-tune the BERT model on the feature-integrated text. Empirical results show that our method improves on the best state-of-the-art model we know of on the LIAR dataset by 16.71% in accuracy.
{"title":"BERT-Based Mental Model, a Better Fake News Detector","authors":"Jia Ding, Yongjun Hu, Huiyou Chang","doi":"10.1145/3404555.3404607","DOIUrl":"https://doi.org/10.1145/3404555.3404607","url":null,"abstract":"Automatic fake news detection is a challenging problem which needs a number of verifiable facts support back. Wang et al. [16] introduced LIAR, a validated dataset, and presented a six classes classification task with several popular machine learning methods to detect fake news in linguistic level. However, empirical results have shown that the CNN and RNN based model can not perform very well especially when integrating all features with claim. In this paper, we are the first to present a method to build up a BERT-based [4] mental model to capture the mental feature in fake news detection. In details, we present a method to construct a patterned text in linguistic level to integrate the claim and features appropriately. Then we fine-tune the BERT model with all features integrated text. Empirical results show that our method provides significant improvement over the state-of-art model based on the LIAR dataset we have known by 16.71% in accuracy.","PeriodicalId":220526,"journal":{"name":"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123724006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the field of natural language processing, recurrent neural networks are good at capturing long-range dependencies and can effectively perform text classification. However, a recurrent neural network models the entire sentence during text feature extraction, which easily ignores the deep semantic information of the text's local phrases. To further enhance the expressiveness of text features, we propose a text classification model based on region embedding and LSTM (RELSTM). RELSTM first divides the text into regions and then generates region embeddings. We introduce a learnable local context unit (LCU) to compute the relative position information of the middle word and its influence on the context words in the region, yielding a region matrix representation. To reduce the complexity of the model, a max pooling operation is applied to the region matrix, producing a dense region embedding. Then we use the LSTM's long-term memory of text information to extract global features. The model is verified on public datasets, and the results are compared against five benchmark models. Experimental results on three datasets show that RELSTM has better overall performance and effectively improves text classification accuracy compared with traditional deep learning models.
{"title":"A Text Classification Model Base On Region Embedding AND LSTM","authors":"Ying Li, Ming Ye","doi":"10.1145/3404555.3404643","DOIUrl":"https://doi.org/10.1145/3404555.3404643","url":null,"abstract":"In the field of natural language processing, recurrent neural networks are good at capturing long-range dependent information and can effectively complete text classification tasks. However, Recurrent neural network is model the entire sentence in the process of text feature extraction, which easily ignores the deep semantic information of the local phrase of the text. To further enhance the expressiveness of text features, we propose a text classification model base on region embedding and LSTM (RELSTM). RELSTM first divides regions for text and then generates region embedding. We introduce the learnable local context unit(LCU) to calculate the relative position information of the middle word and its influence on the context words in the region, and obtain a region matrix representation. In order to reduce the complexity of the model, the max pooling operation is applied to the region matrix and we obtain a dense region embedding. Then, we use LSTM's long-term memory of text information to extract the global characteristics. The model is verified on public data sets, and the results are compared using 5 benchmark models. Experimental results on three dataset show that RELSTM has better overall performance and is effective in improving the accuracy of text classification compared with traditional deep learning models.","PeriodicalId":220526,"journal":{"name":"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124938158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semantic segmentation is a challenging task that can be formulated as a pixel-wise classification problem. Most FCN-based semantic segmentation methods apply simple bilinear up-sampling to recover the final pixel-wise prediction, which may lead to misclassification near object edges. To solve this problem, we focus on supplementing the spatial details of semantic segmentation with edge information, presenting an approach that incorporates relevant auxiliary edge information into the semantic segmentation features. By applying explicit supervision of semantic boundaries to intermediate features, the multi-task network learns features with strong inter-class discriminative ability. An attention-based feature fusion module fuses the high-resolution edge features with the wide-receptive-field semantic features to fully leverage their complementary information. Experiments on the Cityscapes dataset show the effectiveness of fusing intermediate edge information.
{"title":"Auxiliary Edge Detection for Semantic Image Segmentation","authors":"Wenrui Liu, Zongqing Lu, He Xu","doi":"10.1145/3404555.3404624","DOIUrl":"https://doi.org/10.1145/3404555.3404624","url":null,"abstract":"Semantic segmentation is a challenging task which can be formulated as a pixel-wise classification problem. Most FCN-based methods of semantic segmentation apply simple bilinear up-sampling to recover the final pixel-wise prediction, which may lead to misclassification near the object edges. To solve this problem, we focus on the supplementary spatial details of semantic segmentation using edge information. We present an approach to incorporate the relevant auxiliary edge information to semantic segmentation features. By applying the explicit supervision of semantic boundary using intermediate features, the multi-tasks network learns features with strong inter-class distinctive ability. The attention-based feature fusion module fuses the high-resolution edge features with wide-receptive-field semantic features to sufficiently leverage the complementary information. Experiments on the Cityscapes dataset show the effectiveness of fusing intermediate edge information.","PeriodicalId":220526,"journal":{"name":"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114622281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent years have witnessed the effectiveness of two-stream attention networks for video action recognition. However, most methods adopt the same structure for the spatial and temporal streams, which produces a large amount of redundant information and often ignores the relevance among channels. In this paper, we propose channel-wise spatial attention with a spatiotemporal heterogeneous framework, a new approach to action recognition. First, we employ two different network structures for the spatial and temporal streams to improve recognition performance. Then, inspired by the self-attention mechanism, we design a channel-wise network and a spatial network to obtain the fine-grained and salient information in the video. Finally, the video feature for action recognition is generated by end-to-end training. Experimental results on the HMDB51 and UCF101 datasets show that our method can effectively recognize the actions in videos.
{"title":"Channel-Wise Spatial Attention with Spatiotemporal Heterogeneous Framework for Action Recognition","authors":"Yiying Li, Yulin Li, Yanfei Gu","doi":"10.1145/3404555.3404592","DOIUrl":"https://doi.org/10.1145/3404555.3404592","url":null,"abstract":"Recent years have witnessed the effective of attention network based on two-stream for video action recognition. However, most methods adopt the same structure on spatial stream and temporal stream, which produce amount redundant information and often ignore the relevance among channels. In this paper, we propose a channel-wise spatial attention with spatiotemporal heterogeneous framework, a new approach to action recognition. First, we employ two different network structures for spatial stream and temporal stream to improve the performance of action recognition. Then, we design a channel-wise network and spatial network inspired by self-attention mechanism to obtain the fine-grained and salient information of the video. Finally, the feature of video for action recognition is generated by end-to-end training. Experimental results on the datasets HMDB51 and UCF101 shows our method can effectively recognize the actions in the video.","PeriodicalId":220526,"journal":{"name":"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130295912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a generative model for node generation on graph-structured data that combines a graph convolutional architecture and semi-supervised learning with a variational auto-encoder. The idea is motivated by the successful application of deep generative models to images and speech. When applied to graph-structured data, however, especially social network data, existing deep generative models usually do not work: they cannot effectively learn the underlying distributions of social network data. To address this problem, we construct a deep generative model using architectures and techniques that prove effective for modeling network data in practice. Experimental results show that our model can successfully learn the underlying distribution of a social network dataset and generate reasonable nodes, which can be altered by varying the latent variables. This gives us a way to study social network data the same way we study image data.
{"title":"Generative Model for Node Generation","authors":"Boyu Zhang, Xin Wang, Kai Liu","doi":"10.1145/3404555.3404599","DOIUrl":"https://doi.org/10.1145/3404555.3404599","url":null,"abstract":"We present a generative model applied to graph-structured data for node generation by incorporating the graph convolutional architecture and semi-supervised learning with variational auto-encoder. This idea is motivated by successful applications of deep generative models for images and speeches. However, when applied to graph-structured data, especially social network data, existing deep generative models usually don't work: these models can not learn underlying distributions of social network data effectively. In order to address this problem, we construct a deep generative model, using architectures and techniques that prove to be effective for modelling network data in practice. Experimental results show that our model can successfully learn the underlying distribution from the social network dataset, and generate reasonable nodes, which can be altered by varying latent variables. This provides us a way to study social network data in the same way we study image data.","PeriodicalId":220526,"journal":{"name":"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131200864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deepfake techniques have made face swapping in video easy to perform, and the spread of Deepfake videos over networks has become a worldwide concern. This work proposes an approach for detecting them more accurately and robustly. Since the artifacts left by Deepfake tools can be largely categorized into two levels, the semantic level and the noise level, we adopt a two-stream convolutional neural network (CNN) to capture features at both levels concurrently. An Xception network is trained as the first stream to detect semantic anomalies such as editing artifacts around the face contour, missing detail, and geometric inconsistencies in the eyes. Meanwhile, the second stream, which contains a constrained convolution filter and a median filter, is designed to capture tampering traces in local noise. By concatenating the two-level features learned from both streams, our method obtains comprehensive knowledge about the existence of face swapping. The experimental results show its advantage over existing methods in both accuracy and robustness.
{"title":"Detecting Deepfake Video by Learning Two-Level Features with Two-Stream Convolutional Neural Network","authors":"Zheng Zhao, Penghui Wang, W. Lu","doi":"10.1145/3404555.3404564","DOIUrl":"https://doi.org/10.1145/3404555.3404564","url":null,"abstract":"Deepfake techniques has made face swapping in video easy to use. Nowadays, the spread of Deepfake videos over networks is concerned worldwide. This work proposes an approach to more accurate and robust detection of them. Since artifacts left by Deepfake tools can be largely categorized into two classes of different levels, i.e. semantic and noise level, we adopt a two-stream convolutional neural network (CNN) to capture the 2-level features concurrently. Xception network is trained only as the first stream to detect semantic anomalies such as the editing artifacts around face contour, detail missing, and geometric inconsistence in eyes. Meanwhile, the 2nd stream, which contain the constrained convolution filter and median filter, is designed to capture the tampering traces in local noises. By concatenating the 2-level features learned from the both streams, our method obtains very comprehensive knowledge about the existence of face swapping. The experimental results have shown its advantage over the existing methods on both the accuracy and robustness.","PeriodicalId":220526,"journal":{"name":"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131222226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, in order to more efficiently verify digital-display temperature and humidity instruments and better evaluate their quality, we propose a new recognition method for these instruments based on YOLOv3 and character structure clustering. First, since the screen region of a digital-display temperature and humidity instrument contains all the valid characters, we define the smallest bounding rectangle of the screen region as the region of interest and extract it with a YOLOv3-tiny neural network. Then we use a YOLOv3 neural network to detect the characters within the region of interest. Finally, exploiting the intra-class correlation of the characters, we use character structure clustering to obtain the temperature and humidity values. We verify the effectiveness of this method through experiments.
{"title":"Digital-Display Temperature and Humidity Instrument Recognition Based on YOLOv3 and Character Structure Clustering","authors":"Lei Geng, Fengfeng Yan, Zhitao Xiao, Fang Zhang, Yanbei Liu","doi":"10.1145/3404555.3404623","DOIUrl":"https://doi.org/10.1145/3404555.3404623","url":null,"abstract":"In this paper, in order to more efficiently verify digital-display temperature and humidity instruments and better evaluate the quality of digital-display temperature and humidity instruments, we propose a new recognition method of digital-display temperature and humidity instrument based on YOLOv3 and character structure clustering. First, the screen region of digitaldisplay temperature and humidity instrument contains all valid characters, so we define the smallest bounding rectangle region of the screen region as the region of interest. We extract the region of interest through YOLOv3-tiny neural network. Then we use YOLOv3 neural network to detect characters on the region of interest. Finally, according to the intra-class correlation of characters, we use character structure clustering to obtain temperature and humidity values. In addition, in this paper, we verify the effectiveness of this method through experiments.","PeriodicalId":220526,"journal":{"name":"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125914362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
State-of-the-art deep neural network-based speaker recognition systems tend to follow a paradigm of speech feature extraction followed by speaker classifier training, a "divide and conquer" approach. These methods usually rely on fixed, handcrafted features such as Mel-frequency cepstral coefficients (MFCCs) to preprocess the waveform before the classification pipeline. In this paper, inspired by successful and promising work modeling systems directly from the raw speech signal for applications such as speech recognition, anti-spoofing, and emotion recognition, we present an end-to-end speaker recognition system combining a front-end raw-waveform feature extractor, a back-end speaker embedding classifier, and an angle-based loss optimizer. Specifically, the proposed front-end raw-waveform feature extractor is a trainable alternative to MFCCs that requires no modification of the acoustic model. We detail its advantage: a time-convolution layer reduces temporal variations, adaptively learning a front-end speech representation through supervised training together with the rest of the classification model. Our experiments, conducted on the CSTR VCTK Corpus, demonstrate that the proposed end-to-end speaker recognition system achieves state-of-the-art performance compared to baseline models.
{"title":"Learning the Front-End Speech Feature with Raw Waveform for End-to-End Speaker Recognition","authors":"Ningxin Liang, W. Xu, Chengfang Luo, Wenxiong Kang","doi":"10.1145/3404555.3404571","DOIUrl":"https://doi.org/10.1145/3404555.3404571","url":null,"abstract":"State-of-the-art deep neural network-based speaker recognition systems tend to follow the paradigm of speech feature extraction and then the speaker classifier training, namely \"divide and conquer\" approaches. These methods usually rely on fixed, handcrafted features such as Mel frequency cepstral coefficients (MFCCs) to preprocess the waveform before the classification pipeline. In this paper, inspired by the success and promising work to model a system directly from the raw speech signal for applications such as audio speech recognition, anti-spoofing and emotion recognition, we present an end-to-end speaker recognition system, combining front-end raw waveform feature extractor, back-end speaker embedding classifier and angle-based loss optimizer. Specifically, this means that the proposed frontend raw waveform feature extractor builds on a trainable alternative for MFCCs without modification of the acoustic model. And we will detail the superiority of the raw waveform feature extractor, namely utilizing the time convolution layer to reduce temporal variations aiming to adaptively learn a front-end speech feature representation by supervised training together with the rest of classification model. Our experiments, conducted on CSTR VCTK Corpus dataset, demonstrate that the proposed end-to-end speaker recognition system can achieve state-of-the-art performance compared to baseline models.","PeriodicalId":220526,"journal":{"name":"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence","volume":"256 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121895225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}