This paper presents a concept for a new method to provide authentication and confidentiality using a zero-knowledge protocol and key exchange. The zero-knowledge proof protocol is an essential component of cryptography that has become increasingly popular amongst scholars in recent years; its applications have widened, making inroads into several areas including mathematics and network security. It yields a simple protocol, based on zero-knowledge proof, by which a user can prove to the authentication server that he has the password without having to send the password to the server in either clear text or encrypted form. It is a protocol in which the data learned by one party (i.e., the verifier) allows him/her to check that a statement is true without revealing any additional information. In this paper we first discuss the zero-knowledge proof system of knowledge and key exchange between users, which is then modified into an authentication scheme with secret key exchange for confidentiality. The whole protocol involves mutual identification of two users, exchange of a random common secret (session) key, and verification of public keys.
{"title":"An Alternative Methodology for Authentication and Confidentiality Based on Zero Knowledge Protocols Using Diffie-Hellman Key Exchange","authors":"P. Lalitha Surya Kumari, A. Damodaram","doi":"10.1109/ICIT.2014.39","DOIUrl":"https://doi.org/10.1109/ICIT.2014.39","url":null,"abstract":"This paper presents a concept for a new method to provide the authentication and confidentiality using zero knowledge protocol and key exchange. Zero knowledge proof protocol is a essential component of cryptography, which in recent years has increasingly popular amongst scholars. Its applications have widened and it has made inroads in several areas including mathematics and network safety and so on. This simple protocol based on zero knowledge proof by which user can prove to the authentication server that he has the password without having to send the password to the server either clear text or in encrypted format. This is a protocol in which the data learned by one party (i.e., The inspector) allow him/her to verify that a statement is true but does not reveal any additional information. In this paper we first discuss about zero-knowledge protocol proof system of knowledge and also key exchange between users and which then is modified into an authentication scheme with secret key exchange for confidentiality. The whole protocol involves mutual identification of two users, exchange of a random common secret key or session key for the verification of public keys.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"1 1","pages":"368-373"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79235767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The diffusion least mean squares (LMS) algorithm [1] converges faster than the original LMS in a distributed network, and it also outperforms other distributed LMS algorithms such as spatial LMS and incremental LMS [2]. However, neither LMS nor diffusion-LMS is applicable in non-linear environments where the data may not be linearly separable [3]. A variant of LMS called kernel LMS (KLMS) has been proposed in [3] for such non-linearities. In this paper we propose the kernelised version of diffusion-LMS.
{"title":"The Diffusion-KLMS Algorithm","authors":"R. Mitra, V. Bhatia","doi":"10.1109/ICIT.2014.33","DOIUrl":"https://doi.org/10.1109/ICIT.2014.33","url":null,"abstract":"The diffusion least mean squares (LMS) [1] algorithm gives faster convergence than the original LMS in a distributed network. Also, it outperforms other distributed LMS algorithms like spatial LMS and incremental LMS [2]. However, both LMS and diffusion-LMS are not applicable in non-linear environments where data may not be linearly separable [3]. A variant of LMS called kernel-LMS (KLMS) has been proposed in [3] for such non-linearities. We intend to propose the kernelised version of diffusion-LMS in this paper.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"96 1","pages":"256-259"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72982917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Map matching is a well-established problem that deals with mapping raw time-stamped location traces to the edges of a road-network graph. The location traces may come from devices such as GPS receivers or mobile signals. Map matching has applications in mining travel patterns, route prediction, vehicle-turn prediction, resource prediction in grid computing, and more. Existing map-matching algorithms are designed to run on vertically scalable frameworks (adding CPU, disk storage, network resources, etc.), but vertical scaling has known limitations and implementation difficulties. In this paper we present a framework for horizontal scaling of map matching that overcomes the limitations of vertical scaling. The framework uses HBase for data storage and MapReduce for computation, both from the big-data technology stack, and is evaluated by running an ST-Matching-based map-matching algorithm.
{"title":"Framework for Horizontal Scaling of Map Matching: Using Map-Reduce","authors":"V. Tiwari, Arti Arya, Sudha Chaturvedi","doi":"10.1109/ICIT.2014.70","DOIUrl":"https://doi.org/10.1109/ICIT.2014.70","url":null,"abstract":"Map Matching is a well-established problem which deals with mapping raw time stamped location traces to edges of road network graph. Location data traces may be from devices like GPS, Mobile Signals etc. It has applicability in mining travel patterns, route prediction, vehicle turn prediction and resource prediction in grid computing etc. Existing map matching algorithms are designed to run on vertical scalable frameworks (enhancing CPU, Disk storage, Network Resources etc.). Vertical scaling has known limitations and implementation difficulties. In this paper we present a framework for horizontal scaling of map-matching algorithm, which overcomes limitations of vertical scaling. This framework uses Hbase for data storage and map-reduce computation framework. Both of these technologies belong to big data technology stack. Proposed framework is evaluated by running ST-matching based map matching algorithm.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"1 1","pages":"30-34"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72679944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the development of wireless sensor network (WSN) applications, organizing sensor nodes into a communication network and routing the sensed data from the nodes to a remote sink is a challenging task. Energy-efficient, reliable routing of data from source to destination with minimal power consumption remains a core research problem, so WSNs need an efficient protocol that routes transmitted data while extending network lifetime. In this paper we propose a novel clustering algorithm, Front-Leading Energy Efficient Cluster Heads (FLEECH), in which the whole network is partitioned into regions of diminishing size and multiple clusters are formed within each region. Cluster Head (CH) selection takes the residual energy of each node and its distance to the sink as parameters. Simulation results show that FLEECH outperforms Low Energy Adaptive Clustering Hierarchy (LEACH) with respect to energy consumption and extension of network lifetime.
{"title":"A Novel Cluster Head Selection Method for Energy Efficient Wireless Sensor Network","authors":"B. K. Nayak, Monalisa Mishra, S. C. Rai, S. Pradhan","doi":"10.1109/ICIT.2014.74","DOIUrl":"https://doi.org/10.1109/ICIT.2014.74","url":null,"abstract":"In the development of wireless sensor networks (WSNs) applications, organizing sensor nodes into a communication network and route the sensed data from sensor nodes to a remote sink is a challenging task. Energy efficient and reliable routing of data from the source to destination with minimal power consumption remains as a core research problem. So, in WSN we need an efficient protocol to route any transmitted data with extended lifetime of network. In this paper, we propose a novel clustering algorithm, Front-Leading Energy Efficient Cluster Heads (FLEECH), in which the whole network is partitioned into regions with diminishing sizes. In each region, we form multiple clusters. The selection of the Cluster Head (CH) is based on residual energy and distance of each node to the sink as its parameter. Simulation results show that our proposed model FLEECH outperforms Low Energy Adaptive Clustering Hierarchy (LEACH) with respect to energy consumption and extension of network life time.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"12 1","pages":"53-57"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75144335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A dataspace system manages a variety of heterogeneous data in an integrated and incremental fashion. Populating the dataspace uniformly requires a data-publication tool that can extract data from the sources and translate it into a regular format. In this paper we propose a flexible architecture for a Dataspace Publishing Tool (DSP Tool) that publishes heterogeneous data into a dataspace. The tool is simple and easy to customize. The proposed architecture is based on the dataspace principle of pay-as-you-go and populates the dataspace without modeling the semantic heterogeneity of the data.
{"title":"An Architecture of DSP Tool for Publishing the Heterogeneous Data in Dataspace","authors":"Mrityunjay Singh, Shubhangi Jain, V. Panchal","doi":"10.1109/ICIT.2014.23","DOIUrl":"https://doi.org/10.1109/ICIT.2014.23","url":null,"abstract":"A data space system manages a variety of heterogeneous data in integrated and incremental fashion. In order to populate the data uniformly requires a data publication tool which will able to extract the data from the data sources, and translate them into a regular format. In this paper, we have proposed a flexible architecture of Data space Publishing Tool (DSP Tool) for publishing the heterogeneous data into a data space. This tool is simple and easy to customize. The proposed architecture is based on the data space system principle (i.e., Pay-as-you-go), and populates the data space without modeling the semantic heterogeneity of data.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"66 1","pages":"209-214"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73533116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recommender systems are software tools that help a user find items according to his/her preferences from a wide range of choices: for example, selecting a movie from a large on-line movie database, or a song of a preferred kind from the many songs available on the internet. To generate recommendations, the system must first learn the user's preferences from past behaviour so that it can predict new items suitable for that user. Such systems generally learn preferences from the user's past experience using a machine-learning algorithm and then predict new items with the learned model. In this paper we introduce a different approach to recommender systems that learns rules for user preferences using classification based on decision lists. We apply two decision-list classification algorithms, Repeated Incremental Pruning to Produce Error Reduction (RIPPER) and Predictive Rule Mining, to learn rules from users' past behaviour. We also present our recommendation algorithm and discuss the advantages and disadvantages of our approach relative to traditional ones. We validated our recommender system on the MovieLens dataset, a benchmark dataset for recommender-system testing that contains one hundred thousand movie ratings from different users.
{"title":"An Approach to Content Based Recommender Systems Using Decision List Based Classification with k-DNF Rule Set","authors":"Abinash Pujahari, V. Padmanabhan","doi":"10.1109/ICIT.2014.13","DOIUrl":"https://doi.org/10.1109/ICIT.2014.13","url":null,"abstract":"Recommender systems are the software or technical tools that help user to find out items/things according to his/her preferences from a wide range of items/things. For example, selecting a movie from a large database of movies from on-line or selecting a song of his/her own kind from a large number of songs available in the internet and much more. In order to generate recommendations for the users the system has to first learn the user preferences from the user's past behaviours so that it can predict new items/things that are suitable for the respective user. These systems generally learn user's preferences from user's past experiences, using any machine learning algorithm and predict new items/things for the user using the learned preferences. In this paper we introduce a different approach to recommender system which will learn rules for user preferences using classification based on Decision Lists. We have followed two Decision List based classification algorithms like Repeated Incremental Pruning to Produce Error Reduction and Predictive Rule Mining, for learning rules for users past behaviours. We also list out our proposed recommendation algorithm and discuss the advantages as well as disadvantages of our approach to recommender system with the traditional approaches. We have validated our recommender system with the movie lens data set that contains hundred thousand movie ratings from different users, which is the bench mark dataset for recommender system testing.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"53 1","pages":"260-263"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79299490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cognitive Radio (CR) is a key technology proposed to exploit unused spectrum holes opportunistically. In CR communication, shadowing and multipath fading degrade the spectrum-sensing function and thus the detection performance of secondary users (SUs). Cooperative Spectrum Sensing (CSS) has proven to be an emerging scheme that significantly improves sensing performance by utilizing the spatial diversity of the SUs. This paper proposes a game-theoretic coalition model. The main contribution of this work is to account for the cost of cooperation in reporting time and reporting energy. We propose a formulation for deciding the optimal size of a coalition and a scheme for dynamic selection of the coalition head, and we derive the condition for coalition stability within the game. Simulation results show the efficacy of the proposed model, which enhances sensing performance significantly.
{"title":"Constraint Based Cooperative Spectrum Sensing for Cognitive Radio Network","authors":"S. Deka, Prakash Chauhan, N. Sarma","doi":"10.1109/ICIT.2014.12","DOIUrl":"https://doi.org/10.1109/ICIT.2014.12","url":null,"abstract":"Cognitive Radio (CR) is a key technology that has been proposed to exploit the unused spectrum holes opportunistically. In CR communication, the issues of shadowing, multipath fading affect performance of spectrum sensing function that impacts the detection performance of secondary users (SUs). Cooperative Spectrum Sensing (CSS) has proven to be an emerging scheme which significantly improves spectrum sensing performance by utilizing spatial diversity of the SUs. This paper proposes a game theory based coalition model. The main contribution of this work is to consider the constraint during cooperation due to cost involved in reporting time and reporting energy. A formulation has been proposed to decide the optimal size of a coalition and a scheme for dynamic selection of coalition head. In the game, the condition to achieve the coalition stability is carried out. Simulation results have shown the efficacy of the proposed model that enhances the sensing performance significantly.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"8 1","pages":"63-68"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89793031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The paper proposes a modified version of the Differential Evolution (DE) algorithm and an optimization criterion function for extractive text summarization. The cosine similarity measure is used to cluster similar sentences according to a criterion function designed for the summarization problem, and important sentences from each cluster are selected to generate a summary of the document. The modified DE model enforces integer state values and hence expedites optimization compared with the conventional DE approach. Experiments showed a 95.5% improvement in running time for the Discrete DE approach over the conventional DE approach, while the precision and recall of the extracted summaries remained comparable in all cases.
{"title":"Discrete Differential Evolution for Text Summarization","authors":"Shweta Karwa, N. Chatterjee","doi":"10.1109/ICIT.2014.28","DOIUrl":"https://doi.org/10.1109/ICIT.2014.28","url":null,"abstract":"The paper proposes a modified version of Differential Evolution (DE) algorithm and optimization criterion function for extractive text summarization applications. Cosine Similarity measure has been used to cluster similar sentences based on a proposed criterion function designed for the text summarization problem, and important sentences from each cluster are selected to generate a summary of the document. The modified Differential Evolution model ensures integer state values and hence expedites the optimization as compared to conventional DE approach. Experiments showed a 95.5% improvement in time in the Discrete DE approach over the conventional DE approach, while the precision and recall of extracted summaries remained comparable in all cases.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"22 1","pages":"129-133"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86621592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The main components for building today's computer systems are chip multiprocessors, in which multiple processor cores are placed on the same chip. These cores can run several threads of one application or multiple applications at the same time. Efficient execution of such applications depends on the ability of the on-chip interconnect to support multicast with minimum overhead. Many existing works present efficient solutions for one-to-one and broadcast communication, but for multicast they assume on-chip routers capable of providing a separate point-to-point link for each destination. In this paper we propose a new approach to multicast routing for on-chip networks that requires minimal hardware support. Our approach minimizes the average hop traversal of each replica within the network by selecting replication points based on the distribution density of the destination nodes. Experimental results show that link utilization and link power consumption are reduced by 8.36% for an 8 x 8 mesh and by 16% for 12 x 12 and 16 x 16 meshes, demonstrating the scalability of our approach.
{"title":"An Approach for Multicast Routing in Networks-on-Chip","authors":"M. Prasad, Shirshendu Das, H. Kapoor","doi":"10.1109/ICIT.2014.41","DOIUrl":"https://doi.org/10.1109/ICIT.2014.41","url":null,"abstract":"The main components for building today's computer systems are Chip Multiprocessors, where multiple processor cores are placed on the same chip. These cores can run several threads of an application or can run multiple applications at the same time. Efficient execution of such applications depends on the ability of the on-chip interconnect to support multicast with minimum overhead. Many of the existing works present efficient solutions to one-to-one and broadcast communication but for multicast they assume the existence of on-chip router capability to provide several point-to-point links one for each destination. In this paper we propose a new approach for multicast routing for on-chip networks with minimum hardware support. Our approach minimizes the average hop traversal of each replica within the network by selecting replication points based on the distribution density of the destination nodes. Experimental results have shown that link utilization and link power consumption have been reduced by 8.36% in case of an 8 x 8 mesh and for mesh networks of dimensions 12 x 12 and 16 x 16 the percentage reduction is 16%, showing the scalability of our approach.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"2 1","pages":"299-304"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81488349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clustering is an unsupervised classification method in which objects in an unlabeled data set are grouped on the basis of some similarity measure. Conventional partitional clustering algorithms such as K-Means and K-Medoids have several disadvantages: the final solution depends on the initial solution, and they easily get stuck in local optima. Nature-inspired, population-based global search methods are more effective at overcoming these deficiencies because they possess several desirable features, including iterative improvement of candidate solutions, decentralization, inherent parallelism, and self-organizing behavior. In this work we compare the performance of widely applied evolutionary algorithms, namely the Genetic Algorithm (GA) and Differential Evolution (DE), and swarm intelligence methods, namely Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC), in finding clustering solutions, evaluating cluster quality with an internal validity criterion, the Sum of Squared Error (SSE), which is based on cluster compactness. Extensive results are compared on three real and one synthetic data set.
{"title":"Evolutionary and Swarm Intelligence Methods for Partitional Hard Clustering","authors":"J. Prakash, P. Singh","doi":"10.1109/ICIT.2014.67","DOIUrl":"https://doi.org/10.1109/ICIT.2014.67","url":null,"abstract":"Clustering is an unsupervised classification method where objects in the unlabeled data set are classified on the basis of some similarity measure. The conventional partitional clustering algorithms, e.g., K-Means, K-Medoids have several disadvantages such as the final solution is dependent on initial solution, they easily stuck into local optima. The nature inspired population based global search optimization methods offer to be more effective to overcome the deficiencies of the conventional partitional clustering methods as they possess several desired key features like up gradation of the candidate solutions iteratively, decentralization, parallel nature, and self organizing behavior. In this work, we compare the performance of widely applied evolutionary algorithms namely Genetic Algorithm (GA) and Differential Evolution (DE), and swarm intelligence methods namely Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC) to find the clustering solutions by evaluating the quality of cluster with internal validity criteria, Sum of Square Error (SSE), which is based on compactness of cluster. Extensive results are compared based on three real and one synthetic data sets.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"6 1","pages":"264-269"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84284094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}