MePPM - Memory efficient prediction by partial match model for web prefetching
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514318
C. D. Gracia, S. Sudha
The proliferation of the World Wide Web and the immense growth of Internet users and services requiring high bandwidth have substantially increased user response times. Thus, users often experience long latency while retrieving web objects. The popularity of web objects and web sites shows considerable spatial locality, which makes it possible to predict future accesses from previously accessed ones. This has, in fact, motivated researchers to devise new web prefetching techniques to reduce user-perceived latency. Most research works are based on the standard Prediction by Partial Match model and its derivatives, such as the Longest Repeating Sequence and the Popularity-based model, which are built into Markov predictor trees using common surfing patterns. These models require a lot of memory. Hence, in this paper, memory-efficient Prediction by Partial Match models based on the Markov model are proposed to minimize memory usage compared with the standard prediction models and their derivatives.
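The abstract does not spell out the MePPM tree layout, but the underlying PPM/Markov idea can be sketched as follows: contexts of recent page accesses map to counts of next pages, and prediction falls back from longer to shorter contexts. The session data, order bound and dictionary structure below are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

def build_predictor(sessions, max_order=2):
    """Map each context (tuple of up to max_order pages) to next-page counts."""
    model = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for i in range(1, len(session)):
            for order in range(1, max_order + 1):
                if i - order < 0:
                    break
                context = tuple(session[i - order:i])
                model[context][session[i]] += 1
    return model

def predict(model, recent, max_order=2):
    """Predict the next page using the longest context that has been seen."""
    for order in range(min(max_order, len(recent)), 0, -1):
        context = tuple(recent[-order:])
        if context in model:
            counts = model[context]
            return max(counts, key=counts.get)
    return None

# toy sessions standing in for server access logs
sessions = [["A", "B", "C"], ["A", "B", "D"], ["B", "C", "A"]]
model = build_predictor(sessions)
print(predict(model, ["A", "B"]))   # most likely next page after A, B
```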
{"title":"MePPM- Memory efficient prediction by partial match model for web prefetching","authors":"C. D. Gracia, S. Sudha","doi":"10.1109/IADCC.2013.6514318","DOIUrl":"https://doi.org/10.1109/IADCC.2013.6514318","url":null,"abstract":"The proliferation of World Wide Web and the immense growth of Internet users and services requiring high bandwidth have increased the response time of the users substantially. Thus, users often experience long latency while retrieving web objects. The popularity of web objects and web sites show a considerable spatial locality that makes it possible to predict future accesses based on the previous accessed ones. This infact has motivated the researchers to devise new prefetching techniques in web so as to reduce the user perceived latency. Most of the research works are based on the standard Prediction by Partial Match model and its derivates such as the Longest Repeating Sequence and the Popularity based model that are built into Markov predictor trees using common surfing patterns. These models require lot of memory. Hence, in this paper, memory efficient Prediction by Partial Match models based on Markov model are proposed to minimize memory usage compared to the standard Prediction models and its derivatives.","PeriodicalId":325901,"journal":{"name":"2013 3rd IEEE International Advance Computing Conference (IACC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130339979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient focused crawling based on best first search
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514347
S. Rawat, D. R. Patil
The World Wide Web continues to grow at an exponential rate, so fetching information about a specific topic is gaining importance, and this poses exceptional scaling challenges for general-purpose crawlers and search engines. This paper describes a web crawling approach based on best-first search. The goal of a focused crawler is to selectively seek out pages that are relevant to given keywords. Rather than collecting and indexing all available web documents to be able to answer all possible queries, a focused crawler analyzes its crawl boundary to find the links that are likely to be most relevant for the crawl and avoids irrelevant links. This leads to significant savings in hardware as well as network resources and also helps keep the crawl more up-to-date. To accomplish such goal-directed crawling, we select the top k most relevant documents for a given query and then expand the most promising link, chosen according to its link score, to circumvent irrelevant regions of the web.
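As a rough illustration of the best-first strategy, the sketch below keeps a frontier ordered by a link score and always expands the highest-scoring link. The scoring function, the fetch_links callback and the limits are hypothetical placeholders, since the paper's actual relevance computation is not given in the abstract.

```python
import heapq

def link_score(anchor_text, keywords):
    """Toy relevance score: fraction of query keywords present in the anchor text."""
    words = set(anchor_text.lower().split())
    return sum(1 for k in keywords if k in words) / max(len(keywords), 1)

def crawl(seed_links, keywords, fetch_links, max_pages=50):
    """fetch_links(url) -> list of (url, anchor_text); assumed to be supplied by the caller."""
    frontier = [(-link_score(text, keywords), url, text) for url, text in seed_links]
    heapq.heapify(frontier)
    visited, order = set(), []
    while frontier and len(order) < max_pages:
        neg_score, url, _ = heapq.heappop(frontier)   # most promising link first
        if url in visited:
            continue
        visited.add(url)
        order.append((url, -neg_score))
        for out_url, anchor in fetch_links(url):
            if out_url not in visited:
                heapq.heappush(frontier, (-link_score(anchor, keywords), out_url, anchor))
    return order

def fake_fetch(url):
    # stand-in for real page fetching and link extraction
    return {"seed": [("page1", "python parsing tutorial"), ("page2", "cooking recipes")]}.get(url, [])

print(crawl([("seed", "python tutorial hub")], ["python", "parsing"], fake_fetch))
```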
{"title":"Efficient focused crawling based on best first search","authors":"S. Rawat, D. R. Patil","doi":"10.1109/IADCC.2013.6514347","DOIUrl":"https://doi.org/10.1109/IADCC.2013.6514347","url":null,"abstract":"The World Wide Web continues to grow at an exponential rate, so fetching information about a special-topic is gaining importance which poses exceptional scaling challenges for general-purpose crawlers and search engines. This paper describes a web crawling approach based on best first search. As the goal of a focused crawler is to selectively seek out pages that are relevant to given keywords. Rather than collecting and indexing all available web documents to be able to answer all possible queries, a focused crawler analyze its crawl boundary to hit upon the links that are likely to be most relevant for the crawl, and avoids irrelevant links of the document. This leads to significant savings in hardware as well as network resources and also helps keep the crawl more up-to-date. To accomplish such goal-directed crawling, we select top most k relevant documents for a given query and then expand the most promising link chosen according to link score, to circumvent irrelevant regions of the web.","PeriodicalId":325901,"journal":{"name":"2013 3rd IEEE International Advance Computing Conference (IACC)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130635623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real time RSSI error reduction in distance estimation using RLS algorithm
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514305
R. Mehra, Ashutosh Kumar Singh
Recently, received signal strength (RSS)-based distance estimation has been proposed as a low-complexity, low-cost solution for mobile communication nodes with minimum RSSI error. After investigating the existing localization algorithms, it is observed that the distribution of RSSI values at each sample point fluctuates even at the same position due to the shadow fading effect. Therefore, we present a novel method for RSSI error reduction in distance estimation that applies a recursive least squares (RLS) algorithm to the existing deterministic algorithms. The proposed method collects RSSI values from the mobile communication node to build a probability model. Once the probability models are estimated for different standard deviations related to the path loss exponent using adaptive filtering in real time, it is possible to accurately determine the distance between the mobile communication node and a fixed communication node. Simulation results show that the accuracy of real-time, RSSI-based distance estimation for a mobile communication node is improved in changing environments.
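A minimal sketch of the idea, assuming a scalar RLS smoother over repeated RSSI samples followed by a log-distance path-loss conversion; the forgetting factor, reference power and path-loss exponent below are illustrative values, not the paper's parameters.

```python
def rls_smooth(samples, forgetting=0.98):
    """Recursively refine an RSSI estimate (dBm) from noisy repeated samples."""
    est, p = samples[0], 1000.0         # initial estimate and covariance
    for y in samples[1:]:
        k = p / (forgetting + p)        # gain
        est = est + k * (y - est)       # pull estimate toward the new sample
        p = (1.0 - k) * p / forgetting  # update covariance
    return est

def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, path_loss_exp=2.7, d0=1.0):
    """Log-distance path-loss model: RSSI = P0 - 10*n*log10(d/d0)."""
    return d0 * 10 ** ((p0_dbm - rssi_dbm) / (10.0 * path_loss_exp))

samples = [-61.2, -63.5, -59.8, -62.1, -60.4]     # fluctuating RSSI at one position
print(rssi_to_distance(rls_smooth(samples)))       # distance estimate in metres
```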
{"title":"Real time RSSI error reduction in distance estimation using RLS algorithm","authors":"R. Mehra, Ashutosh Kumar Singh","doi":"10.1109/IADCC.2013.6514305","DOIUrl":"https://doi.org/10.1109/IADCC.2013.6514305","url":null,"abstract":"Recently received signal strength (RSS)-based distance estimation technique has been proposed as a low complexity, low-cost solution for mobile communication node with minimum RSSI error. After investigating the existing algorithm of location technique, it is observed that the distribution of RSSI-value at each sample point is fluctuant even in the same position due to shadow fading effect. Therefore, here present a novel method for RSSI error reduction in distance estimation using recursive least square (RLS)-algorithm to the existing deterministic algorithms. The proposed method collects RSSI-values from the mobile communication node to build the probability model. Once the probability models are estimated for different standard deviation related to path loss exponent using adaptive filtering in real time, it is possible to accurately determine the distance between the mobile communication node and fixed communication node. From simulation results it is shown, that the accuracy of RSSI-value for mobile communication node in real time distance estimation is improved in changing environments.","PeriodicalId":325901,"journal":{"name":"2013 3rd IEEE International Advance Computing Conference (IACC)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132367331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RatioRank: Enhancing the impact of inlinks and outlinks
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514328
R. Singh, D. Sharma
The web is the largest collection of information, and due to its dynamic nature plenty of pages and documents are added and deleted frequently. The information on the web is in great demand: the world is full of questions, and the web serves as the major source of information about a specific query made by the user. For a given query, the search engine retrieves a number of pages, and the quality of the retrieved pages is in question. The search engine therefore applies ranking algorithms to the retrieved pages so that the most relevant documents are displayed at the top of the list. In this paper, a new page ranking algorithm known as RatioRank is discussed, in which inlink weights and outlink weights are used together with the number of visit counts, and it is compared with some existing algorithms using certain parameters.
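The abstract does not reproduce the RatioRank formula, so the sketch below only illustrates, under stated assumptions, how a PageRank-style iteration can fold visit counts in alongside link structure; the weighting scheme is hypothetical, not the paper's.

```python
def ranked(pages, links, visits, damping=0.85, iters=50):
    """pages: ids; links: dict page -> outlinked pages; visits: dict page -> visit count."""
    rank = {p: 1.0 / len(pages) for p in pages}
    total_visits = sum(visits.values()) or 1
    for _ in range(iters):
        new = {}
        for p in pages:
            # rank flowing in from pages that link to p, split over their outlinks
            inflow = sum(rank[q] / max(len(links.get(q, [])), 1)
                         for q in pages if p in links.get(q, []))
            visit_bias = visits.get(p, 0) / total_visits   # usage data as the teleport term
            new[p] = (1 - damping) * visit_bias + damping * inflow
        rank = new
    return sorted(rank.items(), key=lambda kv: -kv[1])

pages = ["A", "B", "C"]
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
visits = {"A": 10, "B": 3, "C": 7}
print(ranked(pages, links, visits))
```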
{"title":"RatioRank: Enhancing the impact of inlinks and outlinks","authors":"R. Singh, D. Sharma","doi":"10.1109/IADCC.2013.6514328","DOIUrl":"https://doi.org/10.1109/IADCC.2013.6514328","url":null,"abstract":"As web is the largest collection of information and plenty of pages or documents are newly added and deleted on frequent basis due to the dynamic nature of the web. The information present on the web is of great need, the world is full of questions and the web is serving as the major source of gaining information about specific query made by the user. As per the search engine for the query, a number of pages are retrieved among which the quality of the page that are retrieved is questioned. On the retrieved pages the search engine apply certain algorithms to bring an order to the pages retrieved so that the most relevant document or pages are displayed at the top of list. In this paper a new page ranking algorithm known as the RatioRank is discussed, in which the inlink weights and outlink weights are used with the consideration of number of visit count and is compared with some algorithms by using certain parameters.","PeriodicalId":325901,"journal":{"name":"2013 3rd IEEE International Advance Computing Conference (IACC)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132460281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Averting man in the browser attack using user-specific personal images
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514413
P. Goyal, N. Bansal, N. Gupta
In recent years, there has been a tremendous rise in online banking transactions. Some of the major factors contributing to this growth are the easy availability of high-bandwidth Internet connections at low cost and the ease of online transactions compared with traditional banking methods. This field has also attracted cybercriminals, who take advantage of loopholes in the online transaction process. One of the most advanced threats today is the Man in the Browser (MitB) attack. A MitB attack deploys a Trojan in the user's browser and then steals the user's credentials to initiate a transaction that transfers money to a mule account. All this happens without any notice to the bank or the user. The MitB attack defeats the traditional two-factor authentication currently used by banks to authenticate the identity of the user performing the transaction. This paper presents a new framework for enhancing authentication during an online transaction to tackle the problem of MitB attacks.
{"title":"Averting man in the browser attack using user-specific personal images","authors":"P. Goyal, N. Bansal, N. Gupta","doi":"10.1109/IADCC.2013.6514413","DOIUrl":"https://doi.org/10.1109/IADCC.2013.6514413","url":null,"abstract":"In the recent years, there has been a tremendous rise in online banking transactions. Some of the major factors contributing to this growth are - easy availability of higher bandwidth internet connections at low cost and ease of online transactions compared to the traditional banking methods. Also, this field has attracted the cybercriminals to take advantage of the loopholes in the process of online transaction. One of the most advanced threats today is the Man In The Browser (MitB) attack. MitB attack deploys a Trojan in the browser of the user and then steals the credentials of the user to initiate a transaction for transferring the money in a mule account. All this happens without any notice to bank or the user. MitB attack defeats the traditional two factor authentication currently used by the banks to authenticate the identity of the user performing the transaction. This paper presents a new framework for enhancing authentication during an online transaction to tackle the problem of MitB attack.","PeriodicalId":325901,"journal":{"name":"2013 3rd IEEE International Advance Computing Conference (IACC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128221195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local entropy based brain MR image segmentation
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514403
A. Chaudhari, J. Kulkarni
Magnetic Resonance Imaging (MRI) offers a wealth of information for medical examination. Fast, accurate and reproducible segmentation of MRI is desirable in many applications. Brain image segmentation is important from a clinical point of view for the detection of tumors. Brain images mostly contain noise, inhomogeneity and sometimes deviation. Therefore, accurate segmentation of brain images is a very difficult task. In this paper we present an automatic method of brain segmentation for the detection of tumors. MR images from T1, T2 and FLAIR sequences are used for the study, along with axial, coronal and sagittal slices. The segmentation of MR images is done using textural features based on the gray-level co-occurrence matrix. The textural feature used is the entropy of the image.
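A minimal sketch of local-entropy segmentation, assuming a plain sliding-window histogram entropy rather than the paper's GLCM-based features; the window size, bin count and threshold are illustrative, and random data stands in for an MR slice.

```python
import numpy as np

def local_entropy(img, win=5, bins=16):
    """Entropy of the gray-level histogram in a win x win window around each pixel."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))
    return out

img = (np.random.rand(32, 32) * 255).astype(np.uint8)   # stand-in for an MR slice
ent = local_entropy(img)
mask = ent > ent.mean()                                   # crude high-entropy mask
print(mask.sum(), "pixels flagged")
```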
{"title":"Local entropy based brain MR image segmentation","authors":"A. Chaudhari, J. Kulkarni","doi":"10.1109/IADCC.2013.6514403","DOIUrl":"https://doi.org/10.1109/IADCC.2013.6514403","url":null,"abstract":"Magnetic Resonance Imaging (MRI) offers a lot of information for medical examination. Fast, accurate and reproducible segmentation of MRI is desirable in many applications. Brain image segmentation is important from clinical point of view for detection of tumor. Brain images mostly contain noise, inhomogeneity and sometimes deviation. Therefore, accurate segmentation of brain images is a very difficult task. In this paper we present an automatic method of brain segmentation for detection of tumor. The MR images from T1, T2 and flair sequences are used for the study along with axial, coronal and sagitial slices. The segmentation of MR images is done using textural features based on gray level co occurrence matrix. The textural feature used is the entropy of image.","PeriodicalId":325901,"journal":{"name":"2013 3rd IEEE International Advance Computing Conference (IACC)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134189497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Customized architecture for implementing configurable FFT on FPGA
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514412
V. Chandrakanth, S. Tripathi
The Fourier transform algorithm has spread across diverse fields of engineering, including specialized fields such as radar, communications and image processing systems. Therefore, there have been continual efforts to improve the efficiency of FFT implementations in real-time systems and other hardware. To reduce design time and time to market, FPGA vendors have developed IP cores that can be readily used in applications. But these IP core designs, though efficient, are highly abstract and do not allow the designer to modify them according to specific requirements, which leads to inefficient design realization. Vendor-provided IP cores do not give access to the FFT kernel matrix, thus restricting their configurability and efficiency. In this paper we design a customized architecture to perform the FFT with access to the twiddle factors for improved configurability. The designed architecture is further modified to perform variable-point FFT, targeted at applications in multirate systems. The architecture is generic and can be implemented on any vendor platform.
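The paper targets an FPGA architecture, which cannot be reproduced here; the behavioral sketch below only shows how an explicit twiddle-factor table lets the same radix-2 kernel run for any power-of-two length, mimicking a variable-point, configurable FFT.

```python
import cmath

def twiddle_table(n):
    """Explicit twiddle factors W_n^k = exp(-2*pi*i*k/n) for k = 0 .. n/2 - 1."""
    return [cmath.exp(-2j * cmath.pi * k / n) for k in range(n // 2)]

def fft(x):
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    a = list(x)
    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    w = twiddle_table(n)
    size = 2
    while size <= n:
        step = n // size                       # stride into the twiddle table
        for start in range(0, n, size):
            for k in range(size // 2):
                t = w[k * step] * a[start + k + size // 2]
                u = a[start + k]
                a[start + k] = u + t           # radix-2 butterfly
                a[start + k + size // 2] = u - t
        size *= 2
    return a

print(fft([1, 0, 0, 0, 1, 0, 0, 0]))   # expected ~[2, 0, 2, 0, 2, 0, 2, 0]
```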
{"title":"Customized architecture for implementing configurable FFT on FPGA","authors":"V. Chandrakanth, S. Tripathi","doi":"10.1109/IADCC.2013.6514412","DOIUrl":"https://doi.org/10.1109/IADCC.2013.6514412","url":null,"abstract":"Fourier transform algorithm has encompassed diverse fields of engineering including specialized fields like radars, communications and image processing systems. Therefore there have been continual efforts to improve the efficiency of FFT implementation in real time systems and other hardware. To reduce design time and time to market, FPGA vendors have developed IP cores which can be readily used in our applications. But these IP core designs though efficient are highly abstract and do not provide the designer to modify them according to his requirement which leads to inefficient design realization. Vendor provided IP cores do not give access to FFT kernel matrix thus restricting the configurability and efficiency of using them. In this paper we have designed a customized architecture to perform FFT with access to twiddle factors for improved configurability. The designed architecture is further modified to perform variable point FFT targeted for application in multirate systems. The architecture designed is generic and can be implemented on any vendor platform.","PeriodicalId":325901,"journal":{"name":"2013 3rd IEEE International Advance Computing Conference (IACC)","volume":"183 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134304984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task allocation in a massively parallel system using Finite Automata
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514298
Zubair Khan, Ravindra Singh, Sumit Sanwal, Arun Gangwar, Shabbir Alam
In this paper we propose a new approach to task allocation in a massively parallel system using finite automata. On the basis of the task flow model of a finite automaton, we find the turnaround time for a parallel system by treating the finite automaton as a directed acyclic graph. In the second section of the paper we discuss finite automata and directed acyclic graphs, after which we convert the finite automaton into a DAG for a massively parallel system. All simulations are performed with the Intel C++ parallel compiler, the results are compared with several well-known scheduling algorithms, and we obtain better turnaround time.
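The finite-automaton-to-DAG conversion is not detailed in the abstract; assuming a task DAG is already available, the sketch below list-schedules it onto a fixed number of processors and reports the turnaround time (makespan). The task costs and worker count are illustrative, not the paper's benchmark setup.

```python
from collections import defaultdict

def turnaround(tasks, deps, cost, workers=2):
    """tasks: ids; deps: dict task -> prerequisite tasks; cost: dict task -> duration."""
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    children = defaultdict(list)
    for t, pres in deps.items():
        for p in pres:
            children[p].append(t)
    ready = [t for t in tasks if indeg[t] == 0]    # tasks with all prerequisites done
    free_at = [0.0] * workers                      # next free time of each processor
    finish = {}
    while ready:
        t = ready.pop(0)
        earliest = max([finish[p] for p in deps.get(t, [])], default=0.0)
        i = min(range(workers), key=lambda w: free_at[w])   # least-loaded processor
        start = max(free_at[i], earliest)
        finish[t] = start + cost[t]
        free_at[i] = finish[t]
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return max(finish.values())                    # makespan = turnaround time

tasks = ["t1", "t2", "t3", "t4"]
deps = {"t3": ["t1", "t2"], "t4": ["t3"]}
cost = {"t1": 2, "t2": 3, "t3": 1, "t4": 2}
print(turnaround(tasks, deps, cost))               # makespan on two processors
```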
{"title":"Task allocation in a massively parallel system using Finite Automata","authors":"Zubair Khan Ravindra Singh, Sumit Sanwal, Arun Gangwar, Shabbir Alam","doi":"10.1109/IADCC.2013.6514298","DOIUrl":"https://doi.org/10.1109/IADCC.2013.6514298","url":null,"abstract":"In this paper we are proposing a new approach for tasks allocation in a massively parallel system using Finite Automata. On the basis of task flow model of finite automata., we find the turnaround time for a parallel system using finite automata as a directed acyclic graph in the second section of the paper we discuss regarding the finite automata and directed acyclic graph after that we change finite automata into DAG for massively parallel system. All the simulations are performing in Intel C++ parallel compiler and compare these results with several interesting scheduling algorithms and we get better turnaround time.","PeriodicalId":325901,"journal":{"name":"2013 3rd IEEE International Advance Computing Conference (IACC)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134334379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discovering interesting rules from biological data using parallel genetic algorithm
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514300
S. Dash, S. Dehuri, S. Rayaguru
In this paper, a parallel genetic association rule mining method is proposed to discover interesting rules from a large biological database. The Apriori algorithm and its variants for association rule mining rely on two user-specified threshold parameters, minimum support and minimum confidence, which is obviously an issue to be resolved. In addition, there are other issues, such as the large search space and local optimality, that attract many researchers to heuristic mechanisms. In the presence of large biological databases, and with the aim of circumventing these problems, the genetic algorithm may be taken as a suitable tool, but its computational cost is the main bottleneck. Therefore, we choose parallel genetic algorithms to relieve the burden of computational cost. The experimental results are promising and encourage further research, especially in the domain of biological science.
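A minimal sketch of the fitness idea behind GA-based rule mining, assuming a candidate rule is scored from its support and confidence so that fixed minimum-support/confidence thresholds are not needed; the weights are illustrative, and the chromosome encoding and parallel GA machinery are omitted.

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def fitness(antecedent, consequent, transactions, w_sup=0.4, w_conf=0.6):
    """Score a candidate rule antecedent -> consequent from support and confidence."""
    sup_all = support(antecedent | consequent, transactions)
    sup_ante = support(antecedent, transactions)
    confidence = sup_all / sup_ante if sup_ante else 0.0
    return w_sup * sup_all + w_conf * confidence   # weights are illustrative assumptions

# toy transactions standing in for records of a biological database
transactions = [{"geneA", "geneB", "geneC"}, {"geneA", "geneB"}, {"geneB", "geneC"}]
print(fitness({"geneA"}, {"geneB"}, transactions))
```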
{"title":"Discovering interesting rules from biological data using parallel genetic algorithm","authors":"S. Dash, S. Dehuri, S. Rayaguru","doi":"10.1109/IADCC.2013.6514300","DOIUrl":"https://doi.org/10.1109/IADCC.2013.6514300","url":null,"abstract":"In this paper, a parallel genetic based association rule mining method is proposed to discover interesting rules from a large biological database. Apriori algorithms and its variants for association rule mining rely on two user specified threshold parameters such as minimum support and minimum confidence which is obviously an issue to be resolved. In addition, there are other issues like large search space and local optimality attracts many researchers to use heuristic mechanism. In the presence of large biological databases and with an aim to circumvent these problems, genetic algorithm may be taken as a suitable tool, but its computational cost is the main bottle-neck. Therefore, we choose parallel genetic algorithms to get relief from the pain of computational cost. The experimental result is promising and encouraging to do further research especially in the domain of biological science.","PeriodicalId":325901,"journal":{"name":"2013 3rd IEEE International Advance Computing Conference (IACC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133475894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cluster head selection scheme for data centric wireless sensor networks
Pub Date: 2013-05-13 | DOI: 10.1109/IADCC.2013.6514245
V. Pal, G. Singh, R. Yadav
Energy efficiency is a main design issue for wireless sensor network protocols. Node clustering is an energy-efficient approach for sensor networks: nodes are grouped into independent clusters, and each cluster has a cluster head. The amount of data gathered at the base station depends on the lifetime of the network. Cluster head selection is an important issue for the energy efficiency of clustering schemes, since the intra-cluster communication distance depends on the position of the cluster head within the cluster. In this paper, a new cluster head selection scheme is proposed that can be implemented with any distributed clustering scheme. In the proposed scheme, the network area is divided into two parts, a border area and an inner area, and cluster head selection is restricted to inner-area nodes only. The scheme is implemented and simulated with LEACH in NS-2. Simulations show that the proposed scheme significantly outperforms LEACH in terms of network lifetime and data gathering rate.
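A minimal sketch of the border/inner split, assuming a square field and a LEACH-style election threshold; nodes within a margin of the boundary are simply excluded from becoming cluster heads. Field size, margin and election probability are illustrative values, not the paper's NS-2 parameters, and per-epoch bookkeeping of past cluster heads is omitted.

```python
import random

def is_inner(x, y, side=100.0, margin=15.0):
    """True if the node lies in the inner area, i.e. at least `margin` from every border."""
    return margin <= x <= side - margin and margin <= y <= side - margin

def elect_cluster_heads(nodes, p=0.05, round_no=0):
    """nodes: list of (id, x, y). Returns ids elected cluster head this round."""
    # LEACH-style threshold T(n) = p / (1 - p * (r mod 1/p)), simplified
    threshold = p / (1 - p * (round_no % round(1 / p)))
    heads = []
    for node_id, x, y in nodes:
        if is_inner(x, y) and random.random() < threshold:   # border nodes never elected
            heads.append(node_id)
    return heads

random.seed(1)
nodes = [(i, random.uniform(0, 100), random.uniform(0, 100)) for i in range(50)]
print(elect_cluster_heads(nodes, round_no=3))
```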
{"title":"Cluster head selection scheme for data centric wireless sensor networks","authors":"V. Pal, G. Singh, R. Yadav","doi":"10.1109/IADCC.2013.6514245","DOIUrl":"https://doi.org/10.1109/IADCC.2013.6514245","url":null,"abstract":"Energy efficiency is main design issue for protocols of wireless sensor networks. Node clustering is an energy efficient approach for sensor networks. In clustering algorithms, nodes are grouped into independent clusters and each cluster has a cluster head. Data units gathered at base station depends upon lifetime of network. Cluster head selection is an important issue for energy efficiency of clustering schemes. Intra cluster communication distance depends upon position of cluster head in cluster. In this paper, a new cluster head selection scheme is proposed. Proposed scheme can be implemented with any distributed clustering scheme. In proposed scheme, network area is divided into two parts: border area and inner area. Scheme restricts cluster head selection to only inner area nodes. Scheme is implemented and simulated with LEACH in NS-2. Simulation shows that proposed scheme significantly outperform LEACH for network lifetime and data gathering rate.","PeriodicalId":325901,"journal":{"name":"2013 3rd IEEE International Advance Computing Conference (IACC)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123978514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}