Pub Date: 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642078
V. Pham, Thanh-Hai Tran, Hai Vu
Egocentric vision is an emerging field of computer vision characterized by the acquisition of video from a first-person perspective. In particular, for evaluating upper-extremity rehabilitation, egocentric vision offers the ability to quantitatively measure the function of the hands used in physical exercises. For such applications, hand detection and tracking are the first requirement. In this work, we develop a fully automatic tracking-by-detection pipeline that first extracts hand positions and then tracks the hands across consecutive frames. The proposed framework combines state-of-the-art detectors from the RCNN and YOLO families with advanced trackers (e.g., SORT and DeepSORT) for the tracking task. This paper explores how the performance of the stand-alone object detectors correlates with the overall performance of a tracking-by-detection system. The experimental results show that detection quality strongly affects overall performance. Moreover, this work shows that using visual descriptors in the tracking stage can reduce the number of identity switches and thereby raise the potential of the whole system. We also present the challenges of a new egocentric hand-tracking dataset for future work.
{"title":"Detection and tracking hand from FPV: benchmarks and challenges on rehabilitation exercises dataset","authors":"V. Pham, Thanh-Hai Tran, Hai Vu","doi":"10.1109/RIVF51545.2021.9642078","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642078","url":null,"abstract":"Egocentric vision is an emerging field of computer vision characterized by the acquisition video from the first person perspective. Particularly, for evaluating upper extremity rehabilitation, egocentric vision offers the ability to quantitatively measure the function of hands used in physical-based exercises. For such applications, hand detection and tracking are the first requirement. In this work, we develop a fully automatic tracking by detection pipeline that firstly extracts hands positions and then tracks hands in consecutive frames. The proposed framework consists of state of the art detectors such as RCNN and YOLO family models coupled with advanced trackers (e.g., SORT and DeepSORT) for tracking task. This paper explores how performance of the stand alone object detection algorithms correlates with overall performance of a tracking by detection system. The experimental results show that detection highly impacts the overall performance. Moreover, this work also proves that the use of visual descriptors in the tracking stage can reduce the number of identity switches and thereby increase potential of the whole system. We also present challenges for new egocentric hand tracking dataset for future works.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"53 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89449248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642080
V. D. Do, L. Ngo, D. Mai
The Possibilistic Fuzzy c-means (PFCM) algorithm is a robust clustering algorithm that combines two algorithms, Fuzzy c-means (FCM) and Possibilistic c-means (PCM). It addresses FCM's sensitivity to noise and PCM's weakness of producing coincident clusters. However, PFCM performs poorly when the input data are not linearly separable. To solve this problem, kernel methods have been introduced into possibilistic fuzzy c-means clustering (KPFCM). KPFCM handles noise and outliers better than PFCM, but it suffers from a common drawback of clustering algorithms: it can become trapped in local minima, which degrades the results. Recently, Cuckoo search (CS) based clustering has achieved impressive results, reaching near-global solutions more reliably than most other metaheuristics. In this paper, we propose a hybrid method combining KPFCM with the Cuckoo search algorithm, termed KPFCM-CSA. The experimental results indicate that the proposed method outperforms several well-known recent clustering algorithms in terms of clustering quality.
{"title":"A hybrid kernel-based possibilistic fuzzy c-means clustering and cuckoo search algorithm","authors":"V. D. Do, L. Ngo, D. Mai","doi":"10.1109/RIVF51545.2021.9642080","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642080","url":null,"abstract":"Possibilistic Fuzzy c-means (PFCM) algorithm is a robustness clustering algorithm that combines two algorithms, Fuzzy c-means (FCM) and Possibilistic c-means (PCM). It addresses the weakness of FCM in handling noise sensitivity and the weakness of PCM within the case of coincidence clusters. However, PFCM works inefficiently when the input data is nonlinear separable. To solve this problem, kernel methods have been introduced into possibilistic fuzzy c-means clustering (KPFCM). KPFCM can address noises or outliers data better than PFCM. But KPFCM suffers from a common drawback of clustering algorithms that may be trapped in local minimum which results in not good results. Recently, Cuckoo search (CS) based clustering has proved to achieve fascinating results. It can achieve the best global solution compared to most other metaheuristics. In this paper, we propose a hybrid method encompassing KPFCM and Cuckoo search algorithm to form the proposed KPFCM-CSA. The experimental results indicate that the proposed method outperformed various well-known recent clustering algorithms in terms of clustering quality.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"1 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83654465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642084
Van-Tan Bui, Phuong-Thai Nguyen
Cross-lingual semantic word similarity (CLSW) addresses the task of estimating the semantic distance between two words across languages. This task is an important component of many natural language processing applications. Recent studies have proposed several effective CLSW models for resource-rich language pairs such as English-German and English-French. However, this task has not been effectively addressed for language pairs that include Vietnamese. In this paper, we propose a neural network model that exploits cross-lingual lexical resources to learn high-quality cross-lingual word embedding models. Since our neural network model is language-independent, it can learn a truly multilingual space. Furthermore, we introduce a novel cross-lingual semantic word similarity measurement method based on Word Embeddings and Word Definitions (WEWD). Last but not least, we introduce a standard Vietnamese-English dataset for the cross-lingual semantic word similarity measurement task (VESim-1000). The experimental results show that our proposed method is more robust and outperforms current state-of-the-art methods that rely only on word embeddings or lexical resources.
{"title":"WEWD: A Combined Approach for Measuring Cross-lingual Semantic Word Similarity Based on Word Embeddings and Word Definitions","authors":"Van-Tan Bui, Phuong-Thai Nguyen","doi":"10.1109/RIVF51545.2021.9642084","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642084","url":null,"abstract":"Cross-lingual semantic word similarity (CLSW) ad- dresses the task of estimating the semantic distance between two words across languages. This task is an important component in many natural language processing applications. Recent studies have proposed several effective CLSW models for resource- rich language pairs such as English-German, English-French. However, This task has not been effectively addressed for language pairs consisting of Vietnamese and another one. In this paper, we propose a neural network model that exploits cross- lingual lexical resources to learn high-quality cross-lingual word embedding models. Since our neural network model is language- independent, it can learn a truly multilingual space. Furthermore, we introduce a novel cross-lingual semantic word similarity measurement method based on Word Embeddings and Word Definitions (WEWD). Last but not least, we introduce a standard Vietnamese-English dataset for the cross-lingual semantic word similarity measurement task (VESim-1000). The experimental results show that our proposed method is more robust and outperforms current state-of-the-art methods that are only based on word embeddings or lexical resources.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"94 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83914139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642120
Van Quan Nguyen, V. H. Nguyen, Nhien-An Le-Khac, V. Cao
In previous work, a clustering-based method was incorporated into the latent feature space of an autoencoder to discover sub-classes of normal data for anomaly detection. However, that work requires the number of clusters in the normal training data to be set manually. Finding a proper number of clusters is often ambiguous and depends heavily on the characteristics of the dataset. This paper proposes a novel data-driven empirical approach for automatically identifying the number of normal sub-classes (clusters) without human intervention. The clustering-based method is then co-trained with an autoencoder to automatically discover the appropriate number of clusters of normal training data in the middle hidden layer of the autoencoder. The resulting cluster centers are used to identify anomalies in query data. Our approach is tested on four scenarios from the CTU13 datasets, and the experimental results show that the proposed model often performs better than the model from the previous work on most scenarios.
{"title":"Automatically Estimate Clusters in Autoencoder-based Clustering Model for Anomaly Detection","authors":"Van Quan Nguyen, V. H. Nguyen, Nhien-An Le-Khac, V. Cao","doi":"10.1109/RIVF51545.2021.9642120","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642120","url":null,"abstract":"In a previous work, a clustering-based method had been incorporated with the latent feature space of an autoencoder to discover sub-classes of normal data for anomaly detection. However, the work has the limitation in manually setting up the numbers of clusters in the normal training data. Finding a proper number of clusters in datasets is often ambiguous and highly depends on the characteristics of datasets. This paper proposes a novel data-driven empirical approach for automatically identifying the number of normal sub-classes (clusters) without human intervention. This clustering-based method, afterward, is co-trained with an autoencoder to automatically discover the appreciated number of clusters of normal training data in the middle hidden layer of the autoencoder. The resulting clustering centers are then used to identify anomalies in querying data. Our approach is tested on four scenarios from the CTU13 datasets, and the experimental results show that the proposed model often perform better than those of the model in the previous work on almost scenarios.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"1 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79946184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642101
The-Duong Do, Hong Nhung-Nguyen, A. Pham, Yong-Hwa Kim
Research in autonomous driving technology, considered a driver of the fourth industrial revolution, is defining a new era of mobility. Thanks to its safety and reliability in real-time traffic environments, radar is one of the most important components of driverless vehicles and is the subject of active research. For automotive radar systems on the road, each road environment produces superfluous echoes known as clutter, and the magnitude distribution of the received radar signal varies with the road structure. This leads to an increasing need to classify the road environment and adopt a target detection algorithm suited to each environment's characteristics. However, classifying road environments with supervised algorithms such as feedforward neural networks (FNN) or convolutional neural networks (CNN) requires a massive amount of labeled training data, a common impediment in deep learning. To tackle the shortage of labeled data, this study proposes a semi-supervised GAN approach to recognize different road environments with automotive frequency-modulated continuous-wave (FMCW) radar systems. The proposed model achieves a substantial performance improvement over existing methods, especially when only a small proportion of the training data is labeled, demonstrating the potential of the proposed Semi-GAN-based method for the challenging task of road environment recognition.
{"title":"Semi-Supervised GAN for Road Structure Recognition of Automotive FMCW Radar Systems","authors":"The-Duong Do, Hong Nhung-Nguyen, A. Pham, Yong-Hwa Kim","doi":"10.1109/RIVF51545.2021.9642101","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642101","url":null,"abstract":"Research in autonomous driving systems technology, which is considered as a leader of the fourth industrial revolution, is defining a new era of mobility. Due to its safety and reliability in real-time traffic environments, radar, one of the most important components utilized in driverless vehicles, is actively carried out. For automotive radar systems on the road, each road environment produces superfluous echoes known as clutter, and the magnitude distribution of received radar signal varies reliance on road structures, leading to an increasing requirement for classifying the road environment and adopting a suitable target detection algorithm for each road environment characteristic. However, the classification of road environments using super-vised algorithms such as feedforward neural networks (FNN) or convolutional neural networks (CNN) requires a massive amount of training data, which is a popular impediment in deep learning. In order to tackle the problem of shortage of labeled data, in this study, we propose a semi-supervised GAN approach to recognize different road environments with auto-motive frequency-modulated continuous-wave (FMCW) radar systems. The proposed model achieves a substantial performance improvement over other existing methods, especially when only a small proportion of the training data are labeled, demonstrating the potential of the proposed Semi-GAN-based method for the challenging task of various road environments recognition.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"82 3 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77526724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642083
Duy Nguyen, Tuan-Anh Nguyen, Xuan-Chung Nguyen
In the information age, quickly obtaining and extracting key information from massive and complex resources has become challenging. Extracting information from scanned or captured documents is one of the most demanding processes in areas such as finance, accounting, and taxation. Recent achievements in computer vision have brought substantial improvements to Optical Character Recognition (OCR), including text detection and recognition tasks. However, current OCR faces two challenges. The first is the quality of input data captured by mobile phones, which is strongly affected by external factors such as lighting conditions, dynamic environments, or blurry content. Second, Key Information Extraction (KIE) from documents, a downstream task of OCR, has been a largely under-explored domain because input documents carry not only textual features extracted by OCR systems but also semantic visual features that play a critical role in KIE and are not yet fully utilized. In this paper, we propose an end-to-end system based on several state-of-the-art models from the computer vision and natural language processing areas to address the Mobile-captured receipts OCR (MC-OCR) challenge, which includes two tasks: (1) evaluating the quality of the captured receipt, and (2) recognizing the required fields of the receipt. Our code is publicly available at https://github.com/ndcuong9/MC_OCR
{"title":"MC-OCR Challenge 2021: End-to-end system to extract key information from Vietnamese Receipts","authors":"Duy Nguyen, Tuan-Anh Nguyen, Xuan-Chung Nguyen","doi":"10.1109/RIVF51545.2021.9642083","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642083","url":null,"abstract":"In the information age, how to quickly obtain information and extract key information from massive and complex re-sources has become challenging. Extracting information from scanned or captured document is one of the most demanding process in many areas such as finance, accounting, and taxation. The current achievement in the computer vision field has shown a substantial improvement in the field of Optical Character Recognition (OCR), including text detection and recognition tasks. However, there are two challenges for current OCR. The first one is the quality of the input data which is captured by mobile phone. The quality is greatly affected by external factors like light condition, dynamic environment or blurry content. Secondly, Key Information Extraction (KIE) from documents, which is a downstream task of OCR, had been a largely under explored domain because the input documents have not only textual features extracting from OCR systems but also semantic visual features which are not fully utilized and play a critical role in KIE. In this paper, we propose an end-to-end system based on several state-of-the-art models from both computer vision and natural language processing areas to deal with the Mobile captured receipts OCR (MC-OCR) challenge, including two tasks: (1) evaluating the quality of the captured receipt, and (2) recognizing required fields of the receipt. Our code is publicly available at https://github.com/ndcuong9/MC_OCR","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"36 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87267721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642107
Quang-Duy Nguyen, C. Roussey, P. Bellot, J. Chanet
The Internet of Things envisions a world in which computing devices everywhere connect and exchange data through the Internet. This new scenario demands that context-aware systems evolve with new characteristics and thus brings new challenges for system developers. To address these challenges, this paper presents a system design approach based on a stack of 16 services specialized for context-aware systems. The approach enables system developers to focus on services rather than on hardware and software components. A case study of a smart irrigation context-aware system, also presented in this paper, illustrates the use of this design approach in practice.
{"title":"Stack of Services for Context-Aware Systems: An Internet-Of-Things System Design Approach","authors":"Quang-Duy Nguyen, C. Roussey, P. Bellot, J. Chanet","doi":"10.1109/RIVF51545.2021.9642107","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642107","url":null,"abstract":"The Internet of Things is an ideal world in which all computing devices from all over the world connect and exchange data through the Internet. This new scenario demands context-aware systems to evolve with new characteristics; thus, brings new challenges for system developers in system development. While addressing these challenges, this paper presents a system design approach based on a stack of 16 services specialized for context-aware systems. The approach enables system developers to focus more on services than hardware and software components. The case study of a smart irrigation context-aware system, also presented in this paper, is an example of using this design approach in practice.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"80 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90502364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642146
T. Tran, H. Hoang, Phuong Hoai Dang, M. Riveill
Sentiment analysis, or opinion mining, is used to capture the attitude of a community that has experienced a specific service or product. Sentiment analysis usually concentrates on classifying the opinion of a whole document or sentence. However, in most comments, users express opinions on different aspects of the mentioned entity rather than a general sentiment over the entire document. In this case, aspect-based sentiment analysis (ABSA) is a solution. ABSA focuses on extracting and synthesizing sentiments toward particular aspects of entities in opinion text. Previous studies have difficulty with aspect extraction and sentiment polarity classification across multiple review domains. In this article, we offer a deep learning approach that integrates a bidirectional Long Short-Term Memory (BiLSTM) network with a Convolutional Neural Network (CNN) for multidomain ABSA. Our system performs the following tasks: domain classification, aspect extraction, and aspect-level opinion determination in the document. Besides applying GloVe word embeddings to input sentences from the mixed Laptop_Restaurant domain of the SemEval 2016 dataset, we also use an additional POS layer to capture word morphological attributes before feeding the input to the CNN_BiLSTM architecture, enhancing the flexibility and precision of the proposed model. Through experiments, we found that our proposed model performs domain classification, aspect extraction, and sentiment extraction concurrently on a mixed-domain dataset and achieves positive results compared with previous models that were evaluated only on separate-domain datasets.
{"title":"Multidomain Supervised Aspect-based Sentiment Analysis using CNN_Bidirectional LSTM model","authors":"T. Tran, H. Hoang, Phuong Hoai Dang, M. Riveill","doi":"10.1109/RIVF51545.2021.9642146","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642146","url":null,"abstract":"Sentiment analysis or opinion mining used to capture the community’s attitude who have experienced the specific service/product. Sentiment analysis usually concentrates to classify the opinion of whole document or sentence. However, in most comments, users often express their opinions on different aspects of the mentioned entity rather than express general sentiments on entire document. In this case, using aspect-based sentiment analysis (ABSA) is a solution. ABSA emphases on extracting and synthesizing sentiments on particular aspects of entities in opinion text. The previous studies have difficulty working with aspect extraction and sentiment polarity classification in multiple domains of review. We offer an innovative deep learning approach with the integrated construction of bidirectional Long Short Term Memory (BiLSTM) and Convolutional Neural Network (CNN) for multidomain ABSA in this article. Our system finished the following tasks: domain classification, aspect extraction and opinion determination of aspect in the document. Besides applying GloVe word embedding for input sentences from mixed Laptop_Restaurant domain of the SemEval 2016 dataset, we also use the additional layer of POS to pick out the word morphological attributes before feeding to the CNN_BiLSTM architecture to enhance the flexibility and precision of our suggested model. Through experiment, we found that our proposed model has performed the above mentioned tasks of domain classification, aspect and sentiment extraction concurrently on a mixed domain dataset and achieved the positive results compared to previous models that were performed only on separated domain dataset.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"3 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89326040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642072
Q. C. Truong, B. Gaudou, Minh Van Danh, N. Huynh, A. Drogoul, P. Taillandier
The rice-shrimp farming system is considered a sustainable and environmentally beneficial model. However, the rice-shrimp area has been increasingly narrowed due to the trend of converting from rice to aquaculture for economic reasons. This paper proposes a medium-scale land-use change model for understanding farmers' land-use decisions as they adapt to the environment and climate change. The model integrates a land-use decision-making process based on multi-criteria selection, where the main factors are land suitability, land convertibility, the land-use situation of neighbors, and the profitability of land-use patterns. For land-use data, we used historical land-use maps from 2005, 2015, and 2019; shrimp cultivation regions were completed by Landsat satellite image processing. The model was calibrated with the 2015 rice-shrimp map and verified against the 2019 rice-shrimp map of My Xuyen district, Soc Trang province, Vietnam. The simulation results show that the rice-shrimp area continues to shrink and is being converted to aquaculture land. In addition, the model suggests that under a scenario of 15 cm of sea level rise by 2030, the share of rice-shrimp and shrimp farming rises sharply, an important lesson for developing farmers' complex adaptive strategies.
{"title":"A land-use change model to study climate change adaptation strategies in the Mekong Delta","authors":"Q. C. Truong, B. Gaudou, Minh Van Danh, N. Huynh, A. Drogoul, P. Taillandier","doi":"10.1109/RIVF51545.2021.9642072","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642072","url":null,"abstract":"The rice-shrimp farming system is considered as a sustainable and beneficial model for the environment. However, the area of rice-shrimp was increasingly narrowed due to the trend of converting from rice to aquaculture by economic reasons. This paper aims to propose a medium scale land use change model for understanding the land use decision of farmers in adaptation to the environment and climate change. The model integrates a land-use decision making process based on multi-criteria selection where the main factors are land suitability, land convertibility, land use situation of neighbors, and profitability of land use patterns. Concerning the land use data, we used historical land use map in 2005, 2015 and 2019. Shrimp cultivation regions was completed by Landsat satellite image processing. The model has been calibrated by rice-shrimp map in 2015 and has been verified with the rice – shrimp map in 2019 of the My Xuyen district, Soc Trang province, Vietnam. The simulated results show that the rice-shrimp area was increasingly narrowed and has been converted to aquaculture land. In addition, the model tends to show that in a scenario of sea level rise of 15 cm in 2030, the share of rice-shrimp and shrimp tends to rise sharply, which is an important lesson for developing complex adaptive strategies of farmers.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"102 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80437663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642094
Aayasha Palikhe, Longzhuang Li, Feng Tian, Dulal C. Kar, Ning Zhang, Wen Zhang
Today, mobile phones provide a wide range of applications that make our daily lives easier. With this popularity, smartphones have become a target for cybercrime, where malicious apps are developed to acquire sensitive information or corrupt data. To mitigate this issue and improve the security of mobile devices, different techniques have been used; they can be broadly classified as static, dynamic, and hybrid approaches. In this paper, a static analysis model, MalDuoNet, is proposed to detect Android malware; it uses a DualNet framework to analyze features extracted from API calls. In the MalDuoNet model, one sub-network focuses on learning features relevant to malicious behavior, while the other sub-network learns general features. This enables the model to learn complementary features, which in turn yields richer features for analysis. The features from the two sub-networks are then combined in a fused classifier for the final classification. In addition, each feature extractor has its own classifier so that each sub-network can be optimized separately. The experimental results demonstrate that the MalDuoNet model outperforms the two single-network baseline models.
{"title":"MalDuoNet: A DualNet Framework to Detect Android Malware","authors":"Aayasha Palikhe, Longzhuang Li, Feng Tian, Dulal C. Kar, Ning Zhang, Wen Zhang","doi":"10.1109/RIVF51545.2021.9642094","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642094","url":null,"abstract":"Today mobile phones provide a wide range of applications that make our daily life easy. With popularity, smartphones have become a target for cybercrime where malicious apps are developed to acquire sensitive information or corrupt data. To mitigate this issue and to improve the security in mobile devices, different techniques have been used. These techniques can be broadly classified as static, dynamic and hybrid approaches. In this paper, a static-based model MalDuoNet is proposed to detect Android malwares, which uses a DualNet framework to analyze the features from the API calls. In the MalDuoNet model, one sub-network is focused to learn the features relevant to malicious behavior and the other sub-network is focused to learn the features in general. Thus it enables the model to learn complementary features which in turn helps get richer features for analysis. Then the features from the two sub-networks are combined in the final fused classifier for the final classification. In addition, each of the feature extractors has a separate classifier so that each sub-network can optimize its performance separately. The experimental results demonstrate that the MalDuoNet model outperforms the two baseline models with single network.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"80 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80031441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}