Pub Date : 2021-08-19 | DOI: 10.1109/rivf51545.2021.9642109
Title: [RIVF 2021 Front cover]
Pub Date : 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642131
Title: Multi-UAV Assisted Data Gathering in WSN: A MILP Approach For Optimizing Network Lifetime
Authors: Chi-Hieu Nguyen, Khanh-Van Nguyen
Abstract: In this paper, we study the problem of gathering data from large-scale wireless sensor networks (WSNs) using multiple unmanned aerial vehicles (UAVs) that collect data at designated rendezvous points, with the goal of maximizing the network lifetime. Previous proposals often take a practical approach in which the data-gathering scheme is decomposed into two sub-problems: i) partitioning the network into clusters, whose cluster heads serve as the rendezvous points; and ii) determining the paths along which a given number of UAVs visit these rendezvous points, each of which has harvested the data of its local cluster. We instead treat this as a single optimization problem, accepting a significant increase in computational complexity that brings new challenges for creating practical solutions for large-scale WSNs. We introduce two alternative mixed-integer linear programming (MILP) formulations and show that our best model solves problem instances with up to 50 sensor nodes to optimality in less than 30 minutes. Next, we propose a heuristic that reduces the number of variables in the 3-index model so that it can effectively handle larger networks with hundreds of nodes. Experimental results show that our heuristic approach significantly prolongs the network lifetime compared to the most efficient existing proposals.
{"title":"Multi-UAV Assisted Data Gathering in WSN: A MILP Approach For Optimizing Network Lifetime","authors":"Chi-Hieu Nguyen, Khanh-Van Nguyen","doi":"10.1109/RIVF51545.2021.9642131","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642131","url":null,"abstract":"In this paper, we study the problem of gathering data from large-scale wireless sensor networks using multiple unmanned air vehicles (UAVs) to gather data at designated rendezvouses, where the goal is to maximize the network lifetime. Previous proposals often consider a practical approach where the problem of determining a data gathering scheme is decomposed into 2 sub-problems: i) partitioning the networks into clusters for determining the rendezvouses as these obtained cluster heads; and ii) determining the paths for a set of a given number of UAVs to come gathering data at these rendezvouses which have been harvesting data within each local clusters, respectively. We try to deal with this as a whole optimization problem, expecting a significant increase in computation complexity which would bring new challenge in creating practical solutions for largescale WSNs. We introduce two alternatives mixed-integer linear programming (MILP) formulations and we show that our best model could solve the problem instances optimally with up to 50 sensor nodes in less than 30 minutes. Next, we propose a heuristic idea to reduce the number of variables in implementing the 3-index model to effectively handle larger-scale networks with size in hundreds. The experiment results show that our heuristic approach significantly prolongs the network lifetime compared to existing most efficient proposals.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"5 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75348726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642089
Title: Secure Inference via Deep Learning as a Service without Privacy Leakage
Authors: A. Tran, T. Luong, Cong-Chieu Ha, Duc-Tho Hoang, Thi-Luong Tran
Abstract: Cloud computing plays an important role in many applications today. Many machine-learning-as-a-service platforms provide models for online prediction. However, in many problems involving healthcare or finance, the privacy of the data sent from users to the cloud server must be considered. A machine-learning-as-a-service application must not only provide accurate predictions but also ensure data privacy and security. In this paper, we present a novel secure protocol for computing the scalar product of two real-valued vectors without revealing either vector. Since the scalar product is the most common operation in deep neural networks, our protocol allows a data owner to send her data to a cloud service hosting a deep model and obtain a prediction on that data. We show that the cloud service can apply the neural network to make predictions without any knowledge of the user's original data. We evaluate the proposed protocol on the MNIST image benchmark and a real-life COVID-19 dataset. The results show that our model achieves 98.8% accuracy on MNIST and 95.02% on the COVID-19 dataset with a very simple network architecture and almost no loss of accuracy compared with the original model. Moreover, the proposed system can make around 120,000 predictions per hour on a single low-resource PC, allowing high-throughput, accurate, and private predictions.
{"title":"Secure Inference via Deep Learning as a Service without Privacy Leakage","authors":"A. Tran, T. Luong, Cong-Chieu Ha, Duc-Tho Hoang, Thi-Luong Tran","doi":"10.1109/RIVF51545.2021.9642089","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642089","url":null,"abstract":"Cloud computing plays an important role in many applications today. There is a lot of machine learning as a service that provides models for users’ prediction online. However, in many problems which involve healthcare or finances, the privacy of the data that sends from users to the cloud server needs to be considered. Machine learning as a service application does not only require accurate predictions but also ensures data privacy and security. In this paper, we present a novel secure protocol that ensures to compute a scalar product of two real number vectors without revealing the origin of themselves. The scalar product is the most common operation that used in the deep neural network so that our proposed protocol can be used to allow a data owner to send her data to a cloud service that hosts a deep model to get a prediction of input data. We show that the cloud service is capable of applying the neural network to make predictions without knowledge of the user’s original data. We demonstrate our proposed protocol on an image benchmark dataset MNIST and an real life application dataset - COVID-19. The results show that our model can achieve 98.8% accuracy on MNIST and 95.02% on COVID-19 dataset with very simple network architecture and nearly no reduction in accuracy when compares with the original model. Moreover, the proposed system can make around 120000 predictions per hour on a single PC with low resources. Therefore, they allow high throughput, accurate, and private predictions.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"1 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84312723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642114
Title: Weighted Least Square - Support Vector Machine
Authors: Cuong Nguyen, Phung Huynh
Abstract: In binary classification problems, the two classes of data differ from each other, and the situation becomes more complicated because the clusters within each class also tend to differ. Traditional algorithms such as the Support Vector Machine (SVM), Twin Support Vector Machine (TSVM), and Least Squares Twin Support Vector Machine (LSTSVM) cannot sufficiently exploit structural information at the cluster level of the data, which limits their ability to capture data trends. The Structural Twin Support Vector Machine (S-TSVM) exploits cluster-level structural information when learning its separating hyperplanes, so its ability to describe the data is better than that of TSVM and LSTSVM. However, for datasets in which each class consists of clusters with different trends, the descriptive ability of S-TSVM remains limited, and its training time is not improved over TSVM. This paper proposes a new Weighted Least Square Support Vector Machine (WLS-SVM) for binary classification problems with a clusters-vs-class strategy. Experimental results show that WLS-SVM can describe the distribution tendency of cluster information. Furthermore, WLS-SVM trains faster than S-TSVM and TSVM, and its accuracy is better than that of LSTSVM and TSVM in most cases.
{"title":"Weighted Least Square - Support Vector Machine","authors":"Cuong Nguyen, Phung Huynh","doi":"10.1109/RIVF51545.2021.9642114","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642114","url":null,"abstract":"In binary classification problems, two classes of data seem to be different from each other. It is expected to be more complicated due to the clusters in each class also tend to be different. Traditional algorithms as Support Vector Machine (SVM), Twin Support Vector Machine (TSVM), or Least Square Twin Support Vector Machine (LSTSVM) cannot sufficiently exploit structural information with cluster granularity of the data, cause limitation in the ability to detect data trends. Structural Twin Support Vector Machine (S-TSVM) sufficiently exploits structural information with cluster granularity for learning a represented hyperplane. Therefore, the ability to describe the data of S-TSVM is better than that of TSVM and LSTSVM. However, for the datasets where each class consists of clusters of different trends, the S-TSVM’s ability to describe data seems restricted. Besides, the training time of S-TSVM has not been improved compared to TSVM. This paper proposes a new Weighted Least Square - Support Vector Machine (called WLS-SVM) for binary classification problems with a clusters-vs-class strategy. Experimental results show that WLS-SVM could describe the tendency of the distribution of cluster information. Furthermore, the WLS-SVM training time is faster than that of S-TSVM and TSVM, and the WLS-SVM accuracy is better than LSTSVM and TSVM in most cases.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"196 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85109551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-08-19 | DOI: 10.1109/rivf51545.2021.9642152
Title: [RIVF 2021 Back cover]
Pub Date : 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642124
Title: Keyphrase Extraction Using PageRank and Word Features
Authors: H. T. Le, Que X. Bui
Abstract: Keyphrase extraction is a fundamental task in natural language processing. Its purpose is to generate a set of keyphrases representing the main ideas of an input document. Keyphrase extraction can be used in several applications such as recommendation systems, plagiarism checking, text summarization, and text retrieval. In this paper, we propose an approach that uses PageRank and word features to compute keyphrase scores. Experimental results on the SemEval 2010 dataset show that our method provides promising results compared to existing work in this field.
{"title":"Keyphrase Extraction Using PageRank and Word Features","authors":"H. T. Le, Que X. Bui","doi":"10.1109/RIVF51545.2021.9642124","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642124","url":null,"abstract":"Keyphrase extraction is a fundamental task in natural language processing. Its purpose is to generate a set of keyphrases representing the main idea of the input document. Keyphrase extraction can be used in several applications such as recommendation systems, plagiarism checking, text summarization, and text retrieval. In this paper, we propose an approach using PageRank and word features to compute keyphrases’ scores. Experimental results on SemEval 2010 dataset show that our method provides promising results compared to existing works in this field.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"123 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76071836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642073
Title: An integer programming model for minimizing energy cost in water distribution system using trigger levels with additional time slots
Authors: David Wu, V. Nguyen, M. Minoux, Haï Tran
Abstract: We address the problem of minimizing the energy cost of the pumping operations that supply an elevated water tank from a low source reservoir in water distribution systems (WDS). Pumping can be activated using simple ON-OFF trigger levels without an advanced control system. However, a static optimal solution for simple trigger levels may not be robust when applied to real-world situations. In this paper, we propose an integer programming model that uses the Additional Time Slots idea of [Quintiliani et al., Water Resour Manage, 2019] to make the optimal solution more robust. In addition, computational results on real-world data with single and multiple pumps are presented.
{"title":"An integer programming model for minimizing energy cost in water distribution system using trigger levels with additional time slots","authors":"David Wu, V. Nguyen, M. Minoux, Haï Tran","doi":"10.1109/RIVF51545.2021.9642073","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642073","url":null,"abstract":"We address the problem of minimizing the energy cost generated by pumping operations for supplying an elevated water tanks from a low source reservoir of water distribution systems (WDS). Pumping operations can be activated by using simple ON-OFF trigger levels without advanced control system. A static optimal solution of simple trigger levels may not be robust when being applied to real world situation. In this paper, we propose an integer programming model using the idea of Additional Time Slots of [Quintiliani et. al., Water Resour Manage, 2019] to make the optimal solution more robust. In addition, computational results on real-world data with single and multiple pumps will be presented.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"235 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87087915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642121
Title: MC-OCR Challenge 2021: An end-to-end recognition framework for Vietnamese Receipts
Authors: Hung Le, H. To, Hung An, Khanh Ho, K. Nguyen, Thua Nguyen, Tien Do, T. Ngo, Duy-Dinh Le
Abstract: Recognizing text from receipts is a significant step in automating office processes in fields such as finance and accounting. The MC-OCR Challenge formulates this problem as two tasks: (1) evaluating the quality of the captured receipt, and (2) recognizing its required fields. Our proposed framework is based on three key components: preprocessing, which detects the receipt with Faster R-CNN and aligns it according to its rotation angle and direction; quality-score estimation for task 1, which uses an EfficientNet-B4 model retrained with transfer learning; and text extraction for task 2, which uses PAN for text detection and VietOCR for text recognition. In the final round, our systems achieved the best result in task 1 (0.1 RMSE) and a result comparable to other teams in task 2 (0.3 CER), demonstrating the effectiveness of the proposed method.
{"title":"MC-OCR Challenge 2021: An end-to-end recognition framework for Vietnamese Receipts","authors":"Hung Le, H. To, Hung An, Khanh Ho, K. Nguyen, Thua Nguyen, Tien Do, T. Ngo, Duy-Dinh Le","doi":"10.1109/RIVF51545.2021.9642121","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642121","url":null,"abstract":"Recognizing text from receipts is a significant step in automating office processes for many fields such as finance and accounting. MC-OCR Challenge has formed this problem into two tasks (1) evaluating the quality, and (2) recognizing required fields of the captured receipt. Our proposed framework is based on three key components: preprocessing with receipt detection using Faster R-CNN, alignment based on the angle and direction of rotation; estimate the receipt image quality score in task 1 using EfficientNet-B4 which has been retrained using transfer learning; while PAN is for text detection and VietOCR1 for text recognition. In the final round, our systems have achieved the best result in task 1 (0.1 RMSE) and a comparable result with other teams (0.3 CER) in task 2 which demonstrated the effectiveness of the proposed method.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"6 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83373757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642097
Title: Hand part segmentations in hand mask of egocentric images using Distance Transformation Map and SVM Classifier
Authors: S. Nguyen, Thi-Thu-Hong Le, Thai-Hoc Lu, Trung-Thanh Nguyen, Quang-Khai Tran, Hai Vu
Abstract: Forearm and palm segmentation is a crucial element in hand-pose estimation and in estimating the mobility of the arms and hands. However, separating hand parts (i.e., forearm and palm) in egocentric images has been little explored. In this study, we propose a novel forearm and palm segmentation method using a distance transformation map and an SVM classifier. First, we use the distance transformation map of the hand mask to find circles inscribed in the hand mask. These circles are converted into feature vectors used to train an SVM classifier that predicts the correct circle for approximating the palm region. Based on the predicted palm area, we then segment the forearm and the palm. The proposed method is evaluated on a rehabilitation dataset consisting of hand-mask images extracted from egocentric images of hand rehabilitation exercises. The results show that the proposed method successfully segments the hand and forearm with high accuracy and low computational time.
{"title":"Hand part segmentations in hand mask of egocentric images using Distance Transformation Map and SVM Classifier","authors":"S. Nguyen, Thi-Thu-Hong Le, Thai-Hoc Lu, Trung-Thanh Nguyen, Quang-Khai Tran, Hai Vu","doi":"10.1109/RIVF51545.2021.9642097","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642097","url":null,"abstract":"Forearm and palm segmentation is a crucial element in the hand-pose estimation and mobility of the arms and hands estimation. However, hand parts separations (i.e, forearm and palm) from egocentric images have less been explored. In this study, we propose a novel hand forearm and palm segmentation method using the distance transformation map and an SVM classifier. First, we use a distance transformation map of the hand mask to find circles inscribing on the hand mask. These circles are vectored and construct an SVM classifier to predict the correct circle for approximating the palm region. Based on the predicted palm area, we propose a method for forearm and palm segmentations. The proposed method is evaluated on the rehabilitation dataset which includes egocentric hand mask images extracted from the hand rehabilitation exercise egocentric images. The results show that the proposed method successfully segments hand and forearm with high accuracy and requires low computational time.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"40 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73268231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-08-19 | DOI: 10.1109/RIVF51545.2021.9642071
Title: Improve Quora Question Pair Dataset for Question Similarity Task
Authors: H. T. Le, Dung T. Cao, Trung Bui, Long T. Luong, Huy-Quang Nguyen
Abstract: Automatic detection of semantically equivalent questions is a task of the utmost importance in a question answering system. The Quora dataset, released in the Quora Question Pairs competition organized by Kaggle, has been used by many researchers to train systems for identifying duplicate questions. However, the ground-truth labels in this dataset are not 100% accurate and may include incorrect labeling. In this paper, we concentrate on improving the quality of the Quora dataset by combining several strategies based on BERT, rules, and relabeling by humans.
{"title":"Improve Quora Question Pair Dataset for Question Similarity Task","authors":"H. T. Le, Dung T. Cao, Trung Bui, Long T. Luong, Huy-Quang Nguyen","doi":"10.1109/RIVF51545.2021.9642071","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642071","url":null,"abstract":"Automatic detection of semantically equivalent questions is a task of the utmost importance in a question answering system. The Quora dataset, which was released in the Quora Question Pairs competition organized by Kaggle, has now been used by many researches to train the system in solving the task of identifying duplicate questions. However, the ground truth labels on this dataset are not 100% accurate and may include incorrect labeling. In this paper, we concentrate on improving the quality of the Quora dataset by combining several strategies, basing on Bert, rules, and reassigning labels by humans.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"19 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77717404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}