A review of early detection of cancers using breath analysis
D. Arul Pon Daniel, K. Thangavel, R. Subash, C. Boss
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208385
Authentic and accurate information is basic to any disease-control initiative. More than 70% of diseases are related to lifestyle factors such as food and beverage practices, personal habits, infections, tobacco consumption, and social customs. In addition, urbanization, industrialization, and increasing life spans are known to influence cancer patterns globally. This makes a proper appreciation of risk factors and other causes of cancer by the public essential. Various modalities for early detection through screening are being investigated. The majority of patients present with locally advanced or disseminated disease and are not candidates for surgery. Chemotherapy applied as an adjunct to radiation improves survival and quality of life. New anticancer drugs that have emerged during the last decade show an improved efficacy-to-toxicity ratio. This review focuses on diagnosing cancer at an early stage using non-invasive electronic sensors and intelligent computing methods that capture only the patient's breath. Strengthening the methods for early diagnosis of cancer, together with improved treatments, will have a significant impact on cutting death rates.
Unsupervised hybrid PSO — Relative reduct approach for feature reduction
H. Inbarani, P. K. Nizar Banu
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208295
Feature reduction selects the more informative features and reduces the dimensionality of a database by removing irrelevant features. Selecting features in unsupervised learning is harder than supervised feature selection because there are no class labels to guide the search for relevant features. Rough set theory has proven to be an efficient tool for feature reduction and needs no additional information. Particle Swarm Optimization (PSO) is an evolutionary computation technique that finds globally optimal solutions in many applications. This work combines the benefits of PSO and rough sets for better data reduction. The paper describes a novel Unsupervised PSO-based Relative Reduct (US-PSO-RR) for feature selection, which employs a population of particles in a multi-dimensional space together with a dependency measure. The performance of the proposed algorithm is compared with the existing unsupervised feature selection methods USQR (Unsupervised Quick Reduct) and USSR (Unsupervised Relative Reduct), and the effectiveness of the proposed approach is measured using clustering evaluation indices.
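The US-PSO-RR details (particle encoding over reducts, the rough-set dependency fitness) are in the paper itself; as a rough sketch of the underlying idea, the snippet below runs a standard binary PSO (sigmoid velocity rule) over 0/1 feature masks. The `fitness` function is a toy stand-in for the dependency measure, and all parameter values are illustrative assumptions, not the authors' settings.

```python
import math
import random

random.seed(0)

def fitness(mask):
    # Toy stand-in for the rough-set dependency measure: reward masks
    # containing the "relevant" features 0 and 2, with a small penalty
    # per selected feature to prefer compact reducts.
    relevant = {0, 2}
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(relevant & chosen) - 0.1 * len(chosen)

def binary_pso(n_feats=5, n_particles=6, iters=30, w=0.7, c1=1.5, c2=1.5):
    # Standard binary PSO: each particle is a 0/1 feature mask.
    pos = [[random.randint(0, 1) for _ in range(n_feats)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_feats for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_feats):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # The sigmoid of the velocity is the probability of a 1-bit.
                pos[i][d] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best = binary_pso()
print(best)
```

With this toy fitness the swarm reliably settles on a mask that includes both relevant features while keeping the subset small.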
Optimized partial image encryption scheme using PSO
K. Kuppusamy, K. Thamodaran
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208350
An encryption scheme provides security against illegal duplication and manipulation of multimedia content, especially digital images. In this paper, a novel optimized partial image encryption scheme based on particle swarm optimization (PSO) and the Daubechies-4 transform is developed, which addresses issues such as statistical attacks and confidentiality. The selected coefficients are encrypted in the Daubechies-4 domain with the help of PSO. The IQIM is used to measure image quality distortion based on three factors: loss of correlation, luminance distortion, and contrast distortion. Experimental results are presented to demonstrate the effectiveness of the proposed scheme.
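The core of any partial scheme is that only transform coefficients are encrypted, and the transform must remain invertible. As a rough sketch (not the authors' implementation), the snippet below substitutes a one-level integer Haar lifting step for the Daubechies-4 transform and a fixed XOR keystream for the PSO-driven coefficient selection; the signal, key, and selection rule are all illustrative assumptions.

```python
import random

def haar_forward(x):
    # One level of the integer (lifting) Haar transform; a simplified
    # stand-in for the Daubechies-4 transform used in the paper.
    d = [a - b for a, b in zip(x[::2], x[1::2])]
    s = [b + di // 2 for b, di in zip(x[1::2], d)]
    return s, d

def haar_inverse(s, d):
    x = []
    for si, di in zip(s, d):
        b = si - di // 2
        x += [b + di, b]
    return x

def xor_details(d, key):
    # Partial encryption: only the detail coefficients are masked.
    # XOR with a key-derived stream is its own inverse.
    rng = random.Random(key)
    return [di ^ rng.randint(0, 255) for di in d]

signal = [52, 55, 61, 66, 70, 61, 64, 73]   # e.g. one row of pixel values
s, d = haar_forward(signal)
cipher_d = xor_details(d, key=12345)
# Decrypt: XOR again with the same keystream, then invert the transform.
restored = haar_inverse(s, xor_details(cipher_d, key=12345))
print(restored == signal)  # True
```

Because the approximation coefficients `s` stay in the clear, only part of the data is encrypted, which is exactly what makes the scheme "partial" and cheap.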
Rule extraction from neural networks — A comparative study
Gethsiyal Augasta M., T. Kathirvalavakumar
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208380
Although neural networks have achieved high classification accuracy on many problems, the results are often not interpretable because the networks are regarded as black boxes. To overcome this drawback, researchers have developed many rule extraction algorithms. This paper discusses various rule extraction algorithms based on three approaches: decompositional, pedagogical, and eclectic. It also evaluates the performance of these approaches by comparing different algorithms from each on three real datasets: Wisconsin breast cancer, Pima Indian diabetes, and Iris plants.
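To make the pedagogical approach concrete: it treats the trained network purely as an oracle, queries it, and fits readable rules to its answers (decompositional methods, by contrast, inspect weights and hidden units). The sketch below uses a hard-coded stand-in `oracle` in place of a real trained network; everything here is illustrative, not any surveyed algorithm.

```python
def oracle(x):
    # Stand-in for a trained network's decision function: pedagogical
    # methods only see inputs and outputs, never the weights.
    return 1 if 0.3 <= x <= 0.7 else 0

# Query the black box on a grid and read off an interval rule.
grid = [i / 100 for i in range(101)]
positives = [x for x in grid if oracle(x) == 1]
rule = (min(positives), max(positives))
print(f"IF {rule[0]} <= x <= {rule[1]} THEN class 1")
```

Real pedagogical extractors (e.g. decision-tree induction over network outputs) are more elaborate, but the query-then-summarize loop is the same.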
Cubical key generation and encryption algorithm based on hybrid cube's rotation
D. Rajavel, S. Shantharajah
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208340
We propose a new cryptographic algorithm based on a combination of hybridization and rotation of cubes. Hybridization is performed using magic cubes built from m magic squares of order n to generate hybrid cubes. The resulting hybrid cube is shuffled via a rotation square, which in turn is generated from a randomly selected magic square. Cubic rotation is performed in the same way as shuffling a simple Rubik's cube. Two phases of rotation are carried out: the first rotates the hybrid cube to generate a key, and the second rotates the original text to produce the cipher text. The generated key is in cubical form, and the cipher text produced by this encryption algorithm is more resistant to cryptanalysis.
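The rotation primitive described above can be sketched as a 90-degree clockwise turn of one layer of an n x n x n array, analogous to turning one face of a Rubik's cube. The cube contents below are a plain 1..27 filling, a stand-in for the paper's magic-square-derived hybrid cube values; the layer-indexing convention is an assumption.

```python
def rotate_layer(cube, z):
    # Rotate layer z of an n x n x n cube 90 degrees clockwise,
    # like turning one face of a Rubik's cube.
    layer = cube[z]
    n = len(layer)
    cube[z] = [[layer[n - 1 - c][r] for c in range(n)] for r in range(n)]

# A 3x3x3 stand-in "hybrid cube" filled with 1..27.
n = 3
cube = [[[z * n * n + r * n + c + 1 for c in range(n)] for r in range(n)]
        for z in range(n)]
rotate_layer(cube, 0)
print(cube[0])  # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]
```

Four such rotations of the same layer restore the cube, which is why sequences of layer turns form an easily invertible shuffle, useful for both key generation and decryption.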
Enhancing the performance of MANET using EESCP
G. Kumar, M. Kaliappan, L. J. Julus
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208348
In recent years, the rapid development and widespread application of mobile ad hoc networks (MANETs) have been hampered by security attacks and privacy issues. To cope with these attacks, a large variety of intrusion detection techniques, such as authentication, authorization, cryptographic protocols, and key management schemes, have been developed. Clustering methods allow fast connection, better routing, and topology management of MANETs. In this paper, we introduce a new mechanism called the Energy-Efficient and Secure Communication Protocol (EESCP), which divides the MANET into a set of 2-hop clusters where each node belongs to at least one cluster. The nodes in each cluster elect a leader node (cluster head) to serve as the intrusion detection system (IDS) for the entire cluster. To balance resource consumption, a weight-based leader election model is used, which elects an optimal set of leaders that minimizes overall resource consumption; secure communication is obtained using the Diffie-Hellman key exchange protocol.
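The Diffie-Hellman step mentioned above lets a cluster head and a member node derive a shared secret over an insecure channel. A minimal textbook sketch follows; the tiny parameters `p = 23, g = 5` are for illustration only, since real deployments use large standardized prime groups (e.g. the RFC 3526 MODP groups).

```python
import secrets

p, g = 23, 5  # toy public parameters; real systems use large primes

# Each side picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1          # cluster head's secret
b = secrets.randbelow(p - 2) + 1          # member node's secret
A, B = pow(g, a, p), pow(g, b, p)         # exchanged in the clear

# Both sides derive the same shared key without ever transmitting it.
key_head = pow(B, a, p)
key_member = pow(A, b, p)
print(key_head == key_member)  # True
```

An eavesdropper sees only `p`, `g`, `A`, and `B`; recovering the shared key from those requires solving a discrete logarithm, which is infeasible at realistic parameter sizes.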
Computational time factor analysis of K-means algorithm on actual and transformed data clustering
D. A. Kumar, M. Annie, T. Begum
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208286
Clustering is the process of partitioning a set of objects into a number of distinct groups, or clusters, such that objects from the same group are more similar to each other than objects from different groups. Clusters are a simple and compact representation of a data set and are useful in applications where we have no prior knowledge about the data. Owing to its wide range of applications, there are many approaches to data clustering, varying in complexity and effectiveness. K-means is a standard, landmark algorithm for clustering data, but as a multi-pass algorithm it has high time complexity, while real-time applications demand time efficiency. Hence, we propose a new approach using the Wiener transformation: the data is Wiener-transformed before k-means clustering. Computational results show that the proposed approach is highly time-efficient and also finds very fine clusters.
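The "multi-pass" cost being analyzed is visible in Lloyd's algorithm, which repeats full assignment and update passes over the data until the centroids stop moving; the paper's speedup comes from Wiener-transforming the data first (not reproduced here). The sketch below is a minimal 1-D baseline with an assumed deterministic min/max seeding, not the authors' implementation.

```python
import random

def kmeans_1d(data, k=2):
    # Multi-pass Lloyd's algorithm: alternate assignment and update
    # passes over the whole data set until the centroids stop moving.
    lo, hi = min(data), max(data)
    centroids = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    passes = 0
    while True:
        passes += 1
        groups = [[] for _ in range(k)]
        for x in data:
            groups[min(range(k), key=lambda j: abs(x - centroids[j]))].append(x)
        new = [sum(g) / len(g) if g else centroids[j]
               for j, g in enumerate(groups)]
        if new == centroids:
            return sorted(centroids), passes
        centroids = new

random.seed(1)
data = ([random.gauss(0, 1) for _ in range(100)]
        + [random.gauss(10, 1) for _ in range(100)])
centroids, passes = kmeans_1d(data)
print(centroids, passes)
```

Every pass touches all n points for all k centroids, so total cost grows with the number of passes; any preprocessing that makes clusters tighter can reduce that count.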
Ontology-based semantic web CBIR by utilizing content and model annotations
P. Ambika, J. A. Samath
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208389
With the development of Internet technology and the popularization of multimedia technology, images and visual information, with their rich and varied content, have become an important part of information retrieval. Traditional information retrieval techniques do not meet users' demands. Recently, content-based image retrieval (CBIR) has become a very active topic, and its techniques have developed greatly. Image retrieval methods based on color, texture, shape, and semantics are discussed, analyzed, and compared. Semantics-based image retrieval is a better way to address the semantic-gap problem, so an ontology-based web image retrieval method is stressed in this article. The model considers the ontological requirements of usability, intelligence, and effectiveness. Based on the proposed content-based and model-based annotation models, image queries become easy and effective. Empirical evaluations show that our annotation models deliver accurate results for semantic web image retrieval.
Qualitative evaluation of pixel level image fusion algorithms
M. Sumathi, R. Barani
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208364
Image fusion is the process of combining information from two or more images of the same scene into a single composite image that is more informative and better suited for visual perception or computer processing. The main objective of this paper is to implement various pixel-level fusion algorithms and to determine how well the information contained in the source images is represented in the fused image, for both multimodality and multifocus images. Experiments and qualitative metrics indicate that the Laplacian pyramid method performs better on both multimodality and multifocus images.
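The Laplacian pyramid method singled out above decomposes each source into band-pass detail levels plus a coarse residual, fuses level by level, and reconstructs. The 1-D sketch below uses assumed, simplified choices (pairwise-average down/upsampling, max-absolute fusion of detail levels, averaged residual); real implementations work on 2-D images with Gaussian filtering.

```python
def downsample(x):
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample(x, n):
    out = []
    for v in x:
        out += [v, v]            # nearest-neighbor expansion
    return out[:n]

def laplacian_pyramid(x, levels):
    pyr, cur = [], x
    for _ in range(levels):
        small = downsample(cur)
        # Each level stores the detail lost by down/upsampling.
        pyr.append([a - b for a, b in zip(cur, upsample(small, len(cur)))])
        cur = small
    pyr.append(cur)              # coarse residual
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = [l + u for l, u in zip(lap, upsample(cur, len(lap)))]
    return cur

def fuse(a, b, levels=2):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    # Keep the stronger detail at each position; average the residuals.
    fused = [[x if abs(x) >= abs(y) else y for x, y in zip(la, lb)]
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append([(x + y) / 2 for x, y in zip(pa[-1], pb[-1])])
    return reconstruct(fused)

sig_a = [1, 1, 9, 9, 1, 1, 1, 1]   # "in focus" on the left
sig_b = [1, 1, 1, 1, 9, 9, 1, 1]   # "in focus" on the right
print(fuse(sig_a, sig_b))
```

The decomposition is exactly invertible (pyramid then reconstruct returns the input), so any information loss in fusion comes only from the per-level selection rule, not the transform.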
An efficient leaf recognition algorithm for plant classification using support vector machine
Arunpriya C., Balasaravanan T., Antony Selvadoss Thanamani
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208384
Recognition of plants has become an active area of research, as many plant species are at risk of extinction. This paper uses an efficient machine learning approach for classification. The proposed approach consists of three phases: preprocessing, feature extraction, and classification. The preprocessing phase involves typical image processing steps such as conversion to gray scale and boundary enhancement. The feature extraction phase derives the common DMFs from five fundamental features. The main contribution of this approach is Support Vector Machine (SVM) classification for efficient leaf recognition: 12 leaf features are extracted and orthogonalized into 5 principal variables, which form the input vector to the SVM. Tested on the Flavia dataset and a real dataset and compared with a k-NN approach, the proposed approach produces very high accuracy and requires much less execution time.
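A full SVM needs a quadratic-programming solver, so as a sketch of the comparison pipeline the snippet below implements only the k-NN baseline the paper benchmarks against: majority vote among the k nearest training vectors. The two-feature "leaf" vectors and species labels are invented toy data, not the Flavia features.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label); squared Euclidean distance
    # suffices for ranking neighbors.
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy 2-feature "leaf" vectors (e.g. aspect ratio, rectangularity).
train = [((1.2, 0.7), "species_a"), ((1.3, 0.8), "species_a"),
         ((3.1, 0.4), "species_b"), ((2.9, 0.5), "species_b")]
print(knn_predict(train, (1.25, 0.75)))  # species_a
```

k-NN has no training cost but must scan all stored vectors at query time, which is one reason a trained SVM can win on execution time, as the paper reports.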