In this paper, we are the first to propose performing prognostic analysis of polypoidal choroidal vasculopathy (PCV) using indocyanine green angiography (ICGA) sequences. Our goal is to develop a computer-aided diagnostic system that can predict the likely treatment outcome of patients with PCV based on their before-treatment ICGA sequences. To create a prognostic model for PCV, we utilize both the before-treatment and the after-treatment ICGA sequences collected in the EVEREST study. By comparing the before-treatment and after-treatment PCV regions in the ICGA sequences, we can generate positive and negative samples for training our prognostic model. Here, we design an 8-layer convolutional neural network (CNN) and use it as the prognostic model. We have conducted experiments using 17 patient cases. In particular, we perform leave-one-out cross validation so that each patient is used as a testing case once. Our proposed method achieves promising results on the EVEREST dataset.
{"title":"Prognostic Analysis of Polypoidal Choroidal Vasculopathy Using an Image-Based Approach","authors":"Yong-ming Chen, Wei-Yang Lin, Chia-Ling Tsai","doi":"10.1109/ICS.2016.0088","DOIUrl":"https://doi.org/10.1109/ICS.2016.0088","url":null,"abstract":"In this paper, we are the first to propose performing prognostic analysis of polypoidal choroidal vasculopathy (PCV) using indocyanine green angiography (ICGA) sequences. Our goal is to develop a computer-aided diagnostic system that can predict the likely treatment outcome of patients with PCV based on their before-treatment ICGA sequences. To create a prognostic model for PCV, we utilize both the before-treatment and the after-treatment ICGA sequences collected in the EVEREST study. By comparing the before-treatment and after-treatment PCV regions in the ICGA sequences, we can generate positive and negative samples for training our prognostic model. Here, we design an 8-layer convolutional neural network (CNN) and use it as the prognostic model. We have conducted experiments using 17 patient cases. In particular, we perform leave-one-out cross validation so that each patient is used as a testing case once. Our proposed method achieves promising results on the EVEREST dataset.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123735746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
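The leave-one-out protocol described above can be sketched concretely. The nearest-centroid classifier and the toy 1-D features below are stand-ins for the paper's 8-layer CNN and ICGA-derived samples; only the cross-validation structure itself reflects the abstract.

```python
# Leave-one-out cross validation: each of the n cases serves as the
# held-out test case exactly once, mirroring the paper's 17-patient protocol.

def loo_splits(n_cases):
    """Yield (train_indices, test_index) pairs, one per case."""
    for test in range(n_cases):
        yield [i for i in range(n_cases) if i != test], test

def nearest_centroid_predict(features, labels, train, test):
    """Predict the held-out case's label from per-class centroids (toy model)."""
    centroids = {}
    for cls in set(labels[i] for i in train):
        members = [features[i] for i in train if labels[i] == cls]
        centroids[cls] = sum(members) / len(members)
    x = features[test]
    return min(centroids, key=lambda c: abs(x - centroids[c]))

# Toy 1-D features: small values respond to treatment (label 1), large do not.
features = [0.1, 0.2, 0.15, 0.9, 0.85, 0.95]
labels = [1, 1, 1, 0, 0, 0]
correct = sum(
    nearest_centroid_predict(features, labels, train, test) == labels[test]
    for train, test in loo_splits(len(features))
)
print(correct / len(features))  # 1.0: every held-out case classified correctly
```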
To meet the need for indoor emergency rescue after earthquake disasters, we propose iPOST (Indoor Post-Earthquake Corridor Obstacle Assessment System) to determine the obstacles in corridors after earthquakes. Before an earthquake hits, a pre-earthquake image is used to segment the corresponding floor area for determining the presence of obstacles. After an earthquake, the pre-earthquake and post-earthquake images are compared using the inter-image foreground technique to assess the obstacles in the corridors. To verify the effectiveness of iPOST, we collected over 40 pairs of images captured by corridor surveillance cameras from YouTube and Google, including corridor scenarios with and without the presence of obstacles. The experimental results show that iPOST's accuracy in obstacle assessment reaches 84%.
{"title":"Indoor Post-Earthquake Corridor Obstacle Assessment System","authors":"Hankui Zhang, E. Chu, Shih-Yu Chen","doi":"10.1109/ICS.2016.0084","DOIUrl":"https://doi.org/10.1109/ICS.2016.0084","url":null,"abstract":"Due to the need of indoor emergency rescue for earthquake disasters, we propose the iPOST (Indoor Post-Earthquake Corridor Obstacle Assessment System) to determine the obstacles in the corridors after earthquakes. Before an earthquake hits, a pre-earthquake image is used to segment its corresponding floor area for determining the presence of obstacles. After an earthquake, the pre-earthquake and post-earthquake images are compared by using the interimage foreground technique for assessing the obstacles in the corridors. To verify the effectiveness of the iPOST, we collected over 40 pairs of images captured by corridor surveillance cameras from YouTube and Google, which included corridor scenarios with and without the presence of obstacles. The experiment results show that the iPOST's accuracy in obstacle assessment reaches 84%.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115507130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
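The pre/post comparison step above can be illustrated with a minimal frame-differencing sketch. The abstract does not specify the inter-image foreground technique, so the absolute-difference threshold and changed-area rule below are illustrative assumptions, not the paper's method.

```python
# Compare a pre-earthquake and a post-earthquake grayscale frame by absolute
# differencing, and flag an obstacle when a large-enough fraction of the
# (assumed pre-segmented) floor pixels changed. Thresholds are illustrative.

def obstacle_present(pre, post, diff_thresh=30, area_ratio=0.05):
    """Flag an obstacle when enough pixels changed between the two frames."""
    changed = total = 0
    for row_pre, row_post in zip(pre, post):
        for a, b in zip(row_pre, row_post):
            total += 1
            if abs(a - b) > diff_thresh:
                changed += 1
    return changed / total >= area_ratio

# Toy 4x4 "corridor": the post frame gains a bright block (a fallen object).
pre = [[10] * 4 for _ in range(4)]
post = [[10] * 4 for _ in range(4)]
post[1][1] = post[1][2] = post[2][1] = post[2][2] = 200
print(obstacle_present(pre, post))  # True: 4/16 = 25% of pixels changed
```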
Let G be a graph with minimum degree at least 2. A vertex subset S is a 2-tuple total dominating set of G if every vertex is adjacent to at least two vertices in S. The 2-tuple total domination number of G is the minimum size of a 2-tuple total dominating set. In this paper, we are concerned with the 2-tuple total domination number of the Harary graph H_{2m+1,2n+1} with 2n+1 = (2m+1)l. For m = 1 and m = 2, we show that the numbers are 2l and 2l+1, respectively.
{"title":"A Note on the 2-Tuple Total Domination Problem in Harary Graphs","authors":"Si-Han Yang, Hung-Lung Wang","doi":"10.1109/ICS.2016.0022","DOIUrl":"https://doi.org/10.1109/ICS.2016.0022","url":null,"abstract":"Let G be a graph with minimum degree at least 2. A vertex subset S is a 2-tuple total dominating set of G if every vertex is adjacent to at least two vertices in S. The 2-tuple total domination number of G is the minimum size of a 2-tuple total dominating set. In this paper, we are concerned with the 2-tuple total domination number of a Harary graph H2m+1, 2n+1 with 2n+1 = (2m+1)l. For m = 1 and m = 2, we show that the numbers are 2l and 2l+1, respectively.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122433147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
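The definition above lends itself to a brute-force check on small graphs. The sketch below verifies and minimizes 2-tuple total dominating sets; a 5-cycle is used in place of the Harary graphs studied in the paper, since the abstract does not reproduce their construction.

```python
from itertools import combinations

def is_2tuple_total_dominating(adj, S):
    """Every vertex (inside or outside S) needs at least 2 neighbours in S."""
    return all(len(adj[v] & S) >= 2 for v in adj)

def gamma_x2_total(adj):
    """Minimum size of a 2-tuple total dominating set, by exhaustive search."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for cand in combinations(vertices, k):
            if is_2tuple_total_dominating(adj, set(cand)):
                return k
    return None  # no such set exists (some vertex has degree < 2)

# On the cycle C5 every vertex has exactly two neighbours, so both of them
# must belong to S for every vertex -- forcing S to be the whole vertex set.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(gamma_x2_total(cycle5))  # 5
```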
In this paper, we propose an image registration method for digitized cadastral images in Taiwan. The proposed method is composed of three parts: feature selection, RANSAC-based transform parameter estimation, and image stitching. In the feature selection, Harris corner detection is first used to extract corners as feature points for each engineering image, and then some feature points are selected manually. To reduce the impact of wrongly matched pairs on transform parameter estimation, the RANSAC-based transform parameter estimation is developed. After removing the wrong feature point pairs, the least squares error estimation method is used to estimate the transform parameters. The stitching of the source image and the reference image can then be performed based on the estimated transform parameters. Experimental results show that the proposed method can not only effectively select suitable feature point pairs for parameter estimation but also stitch the source and reference images well.
{"title":"An Image Registration Method for Engineering Images","authors":"Jing-Dai Jiang, Guo-Shiang Lin","doi":"10.1109/ICS.2016.0092","DOIUrl":"https://doi.org/10.1109/ICS.2016.0092","url":null,"abstract":"In this paper, we proposed an image registration method for digitized cadastral images in Taiwan. The proposed method is composed of three parts: feature selection, RANSAC-based transform parameter estimation, and image stitching. In the feature selection, Harris Corner Detection is first used to extract corners as feature points for each engineering image and then some feature points are selected manually. To reduce the impact of wrong matched pairs on transform parameter estimation, the RANSAC-based transform parameter estimation is developed. After removing the wrong feature point pairs, the least squares error estimation method is used to estimate transform parameters. The image stitching between source image and reference image can be performed based on the estimated transform parameters. Experimental results show that the proposed method can not only effectively select suitable feature point pairs for parameter estimation but also stitch source image and reference image well.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125008034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
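The middle stage of the pipeline above — RANSAC rejecting wrongly matched pairs, then least squares refitting on the surviving inliers — can be sketched in miniature. A pure 2-D translation stands in for the cadastral transform model, which the abstract does not specify.

```python
import random

def ransac_translation(pairs, iters=200, tol=1.0, seed=0):
    """Estimate a 2-D translation from (source, reference) point pairs,
    rejecting gross mismatches, then refit by least squares on the inliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (sx, sy), (rx, ry) = rng.choice(pairs)  # minimal sample: one pair
        dx, dy = rx - sx, ry - sy
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - dx) <= tol
                   and abs(p[1][1] - p[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Least-squares refit: the mean displacement minimises squared error.
    n = len(best_inliers)
    dx = sum(r[0] - s[0] for s, r in best_inliers) / n
    dy = sum(r[1] - s[1] for s, r in best_inliers) / n
    return dx, dy, best_inliers

# Five correct matches shifted by (5, -3), plus one gross mismatch.
pairs = [((x, y), (x + 5, y - 3)) for x, y in [(0, 0), (1, 2), (3, 1), (4, 4), (2, 5)]]
pairs.append(((0, 1), (40, 40)))  # wrongly matched pair to be rejected
dx, dy, inliers = ransac_translation(pairs)
print(dx, dy, len(inliers))  # 5.0 -3.0 5
```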
Chih-Hung Hsieh, Cheng-Hao Yan, Ching-Hao Mao, Chi-Ping Lai, Jenq-Shiou Leu
As more and more on-premises services migrate onto the cloud, user behavioral analysis has become popular as a data-driven way to administer the many accounts of on-cloud services. This paper proposes a novel rule-based approach, GMiner, for mining different types of Google cloud drive usage as an unsupervised account-management approach. Experimental results show that GMiner provides accurate, interpretable, and visualized clustering results which are helpful for highlighting inactive accounts, quasi-insider accounts, and other potential cyber-security risks in a real-environment dataset.
{"title":"GMiner: Rule-Based Fuzzy Clustering for Google Drive Behavioral Type Mining","authors":"Chih-Hung Hsieh, Cheng-Hao Yan, Ching-Hao Mao, Chi-Ping Lai, Jenq-Shiou Leu","doi":"10.1109/ICS.2016.0028","DOIUrl":"https://doi.org/10.1109/ICS.2016.0028","url":null,"abstract":"As more and more on-premises services migrate onto the cloud, user behavioral analysis has become popular as a data-driven way to administer the many accounts of on-cloud services. This paper proposes a novel rule-based approach, GMiner, for mining different types of Google cloud drive usage as an unsupervised account-management approach. Experimental results show that GMiner provides accurate, interpretable, and visualized clustering results which are helpful for highlighting inactive accounts, quasi-insider accounts, and other potential cyber-security risks in a real-environment dataset.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126701730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A graph G is said to have a crossing if two edges of G share an interior point. The minimum crossing number of G is denoted by cr(G). The crossing number problem is to find a drawing of a graph with the minimum number of crossings, and it has applications in circuit layout. Although the crossing numbers of join product graphs have been extensively studied, the crossing number of the join product of power graphs with paths is relatively unexplored. Let P_m and P_n be paths with m and n vertices, and let D_n be a graph consisting of n isolated vertices. In this paper, we investigate the crossing number of the kth power of the path P_m joined with the isolated vertices D_n and with the path P_n. We prove the minimum crossing numbers of P_m^k + D_n for m ≤ 6, n ≥ 1, and of P_m^k + P_n for m ≤ 6, n ≥ 2.
{"title":"The Crossing Number of Join Product of kth Power of Path Pm with Isolated Vertices and Path Pn","authors":"S. Hsieh, Cheng-Chian Lin","doi":"10.1109/ICS.2016.0021","DOIUrl":"https://doi.org/10.1109/ICS.2016.0021","url":null,"abstract":"A graph G is said to have a crossing if two edges of G share an interior point. The minimum crossing number of G is denoted by cr(G). The crossing number problem is to find the minimum crossing solution of a graph, and it can be used in applications of circuit layout. Although the crossing numbers of join product graphs have been extensively studied, the crossing number of join product of power graphs with path is relatively unexplored. Let Pm and Pn be paths with m and n vertices, and Dn be a graph consisting of n isolated vertices. In this paper, we investigate the crossing number of kth power of path Pm that joins with isolated vertices Dn and path Pn. We have proved the minimum crossing numbers of Pkm+Dn for m ≤ 6, n ≥ 1, and Pkm+Pn for m ≤ 6, n ≥ 2.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122294227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a unified constraint-based test case generator for white-box method-level unit testing. The derivation of a suite of test cases can be defined as a constraint satisfaction problem. Each test case consists of a test input and an expected output. The program is automatically transformed into a constraint model called a constraint logic graph, a succinct graphical representation of the system of constraints that defines the relationships between the test inputs and actual outputs. The suite of test inputs can be solved from the conjunction of constraints on each complete path of the constraint logic graph by a constraint logic programming language. The specification of a method is defined in the Object Constraint Language. This non-executable specification is automatically transformed into an executable specification defined in a constraint logic programming language, which serves as the test oracle to automatically generate the corresponding expected output for a given test input.
{"title":"Constraint-Based Test Case Generation for White-Box Method-Level Unit Testing","authors":"Chen-Huei Chang, Nai-Wei Lin","doi":"10.1109/ICS.2016.0123","DOIUrl":"https://doi.org/10.1109/ICS.2016.0123","url":null,"abstract":"This paper introduces a unified constraint-based test case generator for white-box method-level unit testing. The derivation of a suite of test cases can be defined as a constraint satisfaction problem. Each test case consists of a test input and an expected output. The program is automatically transformed into a constraint model called constraint logic graph. The constraint logic graph is a succinct graphical representation of the system of constraints that defines the relationships between the test inputs and actual outputs. The suite of test inputs can be solved from the conjunction of constraints on each complete path of the constraint logic graph by a constraint logic programming language. The specification of a method is defined by the Object Constraint Language. This non-executable specification is automatically transformed into an executable specification defined by a constraint logic programming language. This executable specification serves as the test oracle to automatically generate the corresponding expected output for a given test input.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131246683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
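The central idea above — one test input per complete path, solved from that path's conjunction of branch constraints — can be illustrated on a toy method. Brute-force search over a small domain stands in for the constraint logic programming solver used in the paper, and the `classify` function is invented for illustration.

```python
# Each complete path through the method contributes a conjunction of branch
# constraints; solving that conjunction yields a test input covering the path.

def classify(x):  # toy method under test
    if x < 0:
        return "negative"
    elif x % 2 == 0:
        return "even"
    return "odd"

# Path constraints, written as predicates over the input.
paths = {
    "negative": lambda x: x < 0,
    "even":     lambda x: x >= 0 and x % 2 == 0,
    "odd":      lambda x: x >= 0 and x % 2 == 1,
}

def solve(constraint, domain=range(-10, 11)):
    """Return some input satisfying the path constraint, if any (brute force)."""
    return next((x for x in domain if constraint(x)), None)

suite = {label: solve(c) for label, c in paths.items()}
for label, test_input in suite.items():
    assert classify(test_input) == label  # the oracle agrees on each path
print(suite)  # one covering input per path
```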
Shang-Pin Ma, Peng-Zhong Chen, Y. Ma, Jheng-Shiun Jiang
The retrieval and composition of information from multiple apps, services, or local resources can be time-consuming, costly, and inconvenient. To build an effective, efficient, and easy-to-use mobile service composition and delivery approach, in this research we propose an approach called CARSB (Composite App with RESTful Services and Bricks). In CARSB, we introduce the concept of a Service Brick, a rectangular UI component used for the display of specific information; devise a mobile service composition framework that can integrate Service Bricks with backend RESTful services; and provide a web-based software tool, called CARSB Portal, to allow ordinary users to build customized composite mobile applications according to their requirements. Notably, the CARSB Portal facilitates the construction, discovery, composition, preview, and delivery of Service Bricks, i.e., it supports all activities in the Service Brick lifecycle. In addition, quantitative experiments were conducted to verify the proposed CARSB approach; the experimental results demonstrate that CARSB achieves a considerable decrease in operation time and network transmission load.
{"title":"CARSB Portal: A Web-Based Software Tool to Composing Service Bricks and RESTful Services as Mobile Apps","authors":"Shang-Pin Ma, Peng-Zhong Chen, Y. Ma, Jheng-Shiun Jiang","doi":"10.1109/ICS.2016.0119","DOIUrl":"https://doi.org/10.1109/ICS.2016.0119","url":null,"abstract":"The retrieval and composition of information from multiple apps, services, or local resources can be time-consuming, costly, and inconvenient. To build an effective, efficient, and easy-to-use mobile service composition and delivery approach, in this research we propose an approach called CARSB (Composite App with RESTful Services and Bricks). In CARSB, we introduce the concept of a Service Brick, a rectangular UI component used for the display of specific information; devise a mobile service composition framework that can integrate Service Bricks with backend RESTful services; and provide a web-based software tool, called CARSB Portal, to allow ordinary users to build customized composite mobile applications according to their requirements. Notably, the CARSB Portal facilitates the construction, discovery, composition, preview, and delivery of Service Bricks, i.e., it supports all activities in the Service Brick lifecycle. In addition, quantitative experiments were conducted to verify the proposed CARSB approach; the experimental results demonstrate that CARSB achieves a considerable decrease in operation time and network transmission load.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"167 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131794764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chien-Yu Chiou, P. Chung, Chun-Rong Huang, M. Chang
To reduce the chance of traffic crashes, many driver monitoring systems (DMSs) have been developed. A DMS warns the driver under abnormal driving conditions. However, traditional approaches require enumerating abnormal driving conditions. In this paper, we propose a novel DMS which models the driver's normal driving statuses based on sparse reconstruction. The proposed DMS compares the driver's status with his/her personal normal driving status model and identifies abnormal driving statuses that greatly change the driver's appearance. The experimental results show good performance of the proposed DMS in detecting various abnormal driving conditions.
{"title":"Abnormal Driving Behavior Detection Using Sparse Representation","authors":"Chien-Yu Chiou, P. Chung, Chun-Rong Huang, M. Chang","doi":"10.1109/ICS.2016.0085","DOIUrl":"https://doi.org/10.1109/ICS.2016.0085","url":null,"abstract":"To reduce the chance of traffic crashes, many driver monitoring systems (DMSs) have been developed. A DMS warns the driver under abnormal driving conditions. However, traditional approaches require enumerating abnormal driving conditions. In this paper, we propose a novel DMS, which models the driver's normal driving statuses based on sparse reconstruction. The proposed DMS compares the driver's statuses with his/her personal normal driving status model and identifies abnormal driving statuses that greatly change the driver's appearances. The experimental results show good performance of the proposed DMS to detect variant abnormal driver conditions.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132367286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
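The core mechanism above can be sketched as follows. The abstract does not give the exact sparse coder, so plain least-squares reconstruction over a dictionary of normal-status samples stands in for sparse coding here; the principle is the same — a status whose reconstruction residual is large does not resemble any normal status.

```python
import numpy as np

def reconstruction_residual(D, x):
    """Relative error of reconstructing x from the columns of dictionary D."""
    coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)
    return np.linalg.norm(x - D @ coeffs) / np.linalg.norm(x)

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 3))                # 3 "normal appearance" atoms, 20-D
x_normal = D @ np.array([0.5, -1.0, 2.0])   # a combination of normal atoms
x_abnormal = rng.normal(size=20)            # appearance unlike the dictionary

print(reconstruction_residual(D, x_normal))    # near zero: normal status
print(reconstruction_residual(D, x_abnormal))  # large: flagged as abnormal
```

A threshold on the residual then separates normal from abnormal statuses; in the paper the dictionary is personal, built from each driver's own normal driving samples.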
Periodicity mining is used for predicting trends in time series data. It has many applications, including temperature data, stock prices in the financial market, gene expression data analysis, etc. In general, there are three types of periodic patterns which can be detected in time series data: (1) symbol periodicity, (2) sequence periodicity or partial periodic patterns, and (3) segment or full-cycle periodicity. Rasheed et al. proposed a two-phase approach to periodicity mining. In the first phase, they use the suffix tree to produce candidate period patterns of all three types of periodicity in a single run. However, we find that those suffix-tree-related data structures are still inefficient in generating candidate period patterns. Therefore, in this paper, we use the following method for periodicity mining in time series databases: for the design of Phase 1, the generation of candidate patterns, we present our time-position join method. The simulation results show that our method is more efficient than their algorithm.
{"title":"A Time-Position Join Method for Periodicity Mining in Time Series Databases","authors":"Chia-En Li, Ye-In Chang","doi":"10.1109/ICS.2016.0066","DOIUrl":"https://doi.org/10.1109/ICS.2016.0066","url":null,"abstract":"Periodicity mining is used for predicting trends in time series data. It has many applications, including temperature data, stock prices in the financial market, gene expression data analysis, etc. In general, there are three types of periodic patterns which can be detected in time series data: (1) symbol periodicity, (2) sequence periodicity or partial periodic patterns, and (3) segment or full-cycle periodicity. Rasheed et al. proposed a two-phase approach to periodicity mining. In the first phase, they use the suffix tree to produce candidate period patterns of all three types of periodicity in a single run. However, we find that those suffix-tree-related data structures are still inefficient in generating candidate period patterns. Therefore, in this paper, we use the following method for periodicity mining in time series databases: for the design of Phase 1, the generation of candidate patterns, we present our time-position join method. The simulation results show that our method is more efficient than their algorithm.","PeriodicalId":281088,"journal":{"name":"2016 International Computer Symposium (ICS)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128752360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
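The first periodicity type named above, symbol periodicity, can be illustrated with a short sketch: a symbol s is periodic with period p (from start position st) if it recurs at positions st, st+p, st+2p, ... often enough. The hits-over-occurrences confidence measure below follows the common definition used in this line of periodicity-mining work; the authors' time-position join itself is not reproduced here.

```python
def symbol_period_confidence(series, symbol, period, start=0):
    """Fraction of positions start, start+period, ... holding the symbol."""
    positions = range(start, len(series), period)
    hits = sum(1 for i in positions if series[i] == symbol)
    return hits / len(positions)

series = "abcabdabcabc"
# 'a' occurs at every position 0, 3, 6, 9 -> perfect period-3 symbol.
print(symbol_period_confidence(series, "a", 3))            # 1.0
# 'c' occurs at 3 of the 4 positions 2, 5, 8, 11 (position 5 holds 'd').
print(symbol_period_confidence(series, "c", 3, start=2))   # 0.75
```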