Data Release 12 is the final data release of SDSS-III, containing all SDSS observations. Its massive set of spectra can be used not only for research on the structure and evolution of the Galaxy but also for multi-waveband identification. In addition, the spectra form an ideal sample for data mining for rare and special objects such as white dwarf main-sequence (WDMS) binaries. A WDMS binary consists of a white dwarf primary and a low-mass main-sequence companion; such systems are valuable for studying the evolution and parameters of close binaries. In this paper, after feature extraction by PCA, a clustering approach is proposed based on the idea that cluster centers are characterized by a higher density than their neighbors and by a relatively large distance from points with higher densities. A total of 2,340 WDMS candidates are selected by the method, some of which are new discoveries, demonstrating that our approach to finding special celestial bodies in massive spectral data is feasible.
{"title":"Data Mining for WDMS in SDSS DR12 Archive","authors":"Jiang Bin, Chunyu Ma, W. Wenyu, Wang Wei, Gao Jun","doi":"10.1109/ICISCE.2016.70","DOIUrl":"https://doi.org/10.1109/ICISCE.2016.70","url":null,"abstract":"Data Release 12 is the final data release of SDSS-III, containing all SDSS observations. Its massive set of spectra can be used not only for research on the structure and evolution of the Galaxy but also for multi-waveband identification. In addition, the spectra form an ideal sample for data mining for rare and special objects such as white dwarf main-sequence (WDMS) binaries. A WDMS binary consists of a white dwarf primary and a low-mass main-sequence companion; such systems are valuable for studying the evolution and parameters of close binaries. In this paper, after feature extraction by PCA, a clustering approach is proposed based on the idea that cluster centers are characterized by a higher density than their neighbors and by a relatively large distance from points with higher densities. A total of 2,340 WDMS candidates are selected by the method, some of which are new discoveries, demonstrating that our approach to finding special celestial bodies in massive spectral data is feasible.","PeriodicalId":6882,"journal":{"name":"2016 3rd International Conference on Information Science and Control Engineering (ICISCE)","volume":"39 1","pages":"285-288"},"PeriodicalIF":0.0,"publicationDate":"2016-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87279247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
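The clustering rule this abstract describes (centers have a higher density than their neighbors and sit far from any denser point) can be sketched in a few lines. The function name, the cutoff-kernel density, and the toy parameters below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def density_peak_centers(X, dc, n_centers):
    """Rank points by the product of local density (rho) and distance to the
    nearest denser point (delta): cluster centers score high on both."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    rho = (d < dc).sum(axis=1) - 1           # neighbours within dc, excluding self
    order = np.argsort(-rho, kind="stable")  # densest first; stable on ties
    delta = np.empty(len(X))
    delta[order[0]] = d[order[0]].max()      # densest point gets its farthest distance
    for k in range(1, len(X)):
        i = order[k]
        delta[i] = d[i, order[:k]].min()     # distance to nearest denser (earlier) point
    return np.argsort(-(rho * delta), kind="stable")[:n_centers]
```

On two well-separated toy blobs, the two top-scoring points are one point from each blob, which is the decision-graph behavior the abstract relies on.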
Xueyang Ma, Hongli Zhang, Xiangzhan Yu, Yingjun Li
The Web has become a large platform for publishing and consuming information. Web news and blogs are representative information sources that provide convenient ways to keep informed. In addition to the main content, most web pages also contain navigation panels, advertisements, recommended articles, etc. Effectively extracting news and blog content while filtering out this noise is necessary and challenging. In this paper we propose a news and blog content extraction approach that is portable across different languages and various domains. Our extensive case studies show that characters which are not anchor text but whose sentences contain stop words are more likely to belong to the genuine content. Our method first traverses the entire DOM tree and counts the valid characters attached to each DOM node. It then recursively steps into the most representative child node based on valid characters, and finally stops at the main content node according to a predefined criterion. To validate the approach, we conduct experiments on online news and blog pages randomly selected from well-known Chinese and English websites. Experimental results show that our method achieves 96% F1-measure on average and outperforms CETR.
{"title":"A Template Independent Approach for Web News and Blog Content Extraction","authors":"Xueyang Ma, Hongli Zhang, Xiangzhan Yu, Yingjun Li","doi":"10.1109/ICISCE.2016.36","DOIUrl":"https://doi.org/10.1109/ICISCE.2016.36","url":null,"abstract":"The Web has become a large platform for publishing and consuming information. Web news and blogs are representative information sources that provide convenient ways to keep informed. In addition to the main content, most web pages also contain navigation panels, advertisements, recommended articles, etc. Effectively extracting news and blog content while filtering out this noise is necessary and challenging. In this paper we propose a news and blog content extraction approach that is portable across different languages and various domains. Our extensive case studies show that characters which are not anchor text but whose sentences contain stop words are more likely to belong to the genuine content. Our method first traverses the entire DOM tree and counts the valid characters attached to each DOM node. It then recursively steps into the most representative child node based on valid characters, and finally stops at the main content node according to a predefined criterion. To validate the approach, we conduct experiments on online news and blog pages randomly selected from well-known Chinese and English websites. Experimental results show that our method achieves 96% F1-measure on average and outperforms CETR.","PeriodicalId":6882,"journal":{"name":"2016 3rd International Conference on Information Science and Control Engineering (ICISCE)","volume":"53 1","pages":"120-125"},"PeriodicalIF":0.0,"publicationDate":"2016-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81274676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
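A minimal sketch of the traversal the abstract describes: count non-anchor characters whose text carries stop words per DOM node, then descend into the dominant child. The stop-word list, the 0.9 stopping threshold, and the function names are assumptions for illustration; the paper's actual criterion is not reproduced here:

```python
import xml.etree.ElementTree as ET

STOP_WORDS = {"the", "a", "of", "is", "and", "to", "in"}  # toy stop-word list

def valid_chars(node, in_anchor=False):
    """Count characters of text that is not anchor text and whose words include
    at least one stop word (the 'valid character' cue described above)."""
    total, texts = 0, []
    if node.text and node.tag != "a" and not in_anchor:
        texts.append(node.text)
    for child in node:
        total += valid_chars(child, in_anchor or node.tag == "a")
        if child.tail and not in_anchor:
            texts.append(child.tail)
    return total + sum(len(t) for t in texts
                       if any(w in STOP_WORDS for w in t.lower().split()))

def extract_main(node):
    """Descend into the child holding the most valid characters; stop when no
    child still accounts for (here) 90% of its parent's count."""
    best = max(node, key=valid_chars, default=None)
    if best is None or valid_chars(best) < 0.9 * valid_chars(node):
        return node
    return extract_main(best)
```

On a toy page with an anchor-only navigation block and one article paragraph, the recursion lands on the paragraph node, since anchor text never counts as valid characters.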
Detecting composite events effectively in a large area with as few sensor nodes as possible is a difficult problem: a composite event contains multiple atomic events that require many different types of heterogeneous nodes for cooperative monitoring, and coverage quality degrades when sensor nodes are insufficient. Most traditional methods focus on atomic event detection, which needs only one type of homogeneous node. Considering the temporal and spatial association, costs, and sensing capabilities of different types of heterogeneous sensor nodes, we propose in this paper a novel composite event coverage problem that minimizes deployment cost subject to the constraint of achieving a certain coverage quality, and we give a mathematical model for this optimization problem. We then propose an exact algorithm and a greedy approximation algorithm to solve it. Experimental results and analysis show the performance of the proposed algorithms.
{"title":"Deployment Cost Optimal for Composite Event Detection in Heterogeneous Wireless Sensor Networks","authors":"Xiaoqing Dong","doi":"10.1109/ICISCE.2016.275","DOIUrl":"https://doi.org/10.1109/ICISCE.2016.275","url":null,"abstract":"Detecting composite events effectively in a large area with as few sensor nodes as possible is a difficult problem: a composite event contains multiple atomic events that require many different types of heterogeneous nodes for cooperative monitoring, and coverage quality degrades when sensor nodes are insufficient. Most traditional methods focus on atomic event detection, which needs only one type of homogeneous node. Considering the temporal and spatial association, costs, and sensing capabilities of different types of heterogeneous sensor nodes, we propose in this paper a novel composite event coverage problem that minimizes deployment cost subject to the constraint of achieving a certain coverage quality, and we give a mathematical model for this optimization problem. We then propose an exact algorithm and a greedy approximation algorithm to solve it. Experimental results and analysis show the performance of the proposed algorithms.","PeriodicalId":6882,"journal":{"name":"2016 3rd International Conference on Information Science and Control Engineering (ICISCE)","volume":"25 1","pages":"1288-1292"},"PeriodicalIF":0.0,"publicationDate":"2016-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87404892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
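The greedy strategy mentioned in the abstract can be illustrated as cost-ratio set cover over candidate sensors. The data layout and function name here are hypothetical, and this sketch ignores the temporal/spatial association and coverage-quality weighting that the paper's model includes:

```python
def greedy_deployment(candidates, required):
    """Repeatedly pick the candidate sensor with the best newly-covered-events
    per cost ratio until every required atomic event is covered.
    candidates: list of (cost, set_of_atomic_events) pairs."""
    covered, chosen, total = set(), [], 0.0
    target = set(required)
    while not target <= covered:
        best_i, best_ratio = None, 0.0
        for i, (cost, events) in enumerate(candidates):
            gain = len((events & target) - covered)
            if i not in chosen and gain / cost > best_ratio:
                best_i, best_ratio = i, gain / cost
        if best_i is None:
            raise ValueError("coverage goal unreachable")
        chosen.append(best_i)
        covered |= candidates[best_i][1]
        total += candidates[best_i][0]
    return total, chosen
```

For example, a single heterogeneous node covering two atomic events at cost 1.5 beats two homogeneous nodes at cost 1.0 each, because its gain-per-cost ratio is higher.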
Objective: Target classification is too complex to be used directly to distinguish natural objects from man-made objects in aerial images, so in this paper we instead analyze the classification and features of targets at sea. Based on the features of the contour map, a method for eliminating natural objects is proposed. Methods: First, we decompose the contour map through a perspective transformation to establish an estimate of the true image. Second, the Fourier-Mellin transform is adopted to match the outline of the estimated object image with the outline of the true object image. We then use the Hausdorff distance to correct this preliminary matching, and finally adopt a difference method to remove the natural objects. Results: The experimental results show that the method can effectively remove natural objects while keeping man-made objects in the observed image. Conclusion: In this study, we propose a new method for removing natural objects based on the known contour information of the perspective transformation. The experimental results show that the method is feasible, reducing the difficulty of target detection and improving the efficiency of target classification.
{"title":"Removing Natural Objects from the Sea Surface Background Image Based on Contour Map and Local-Hausdorff Distance","authors":"Li Chonglun","doi":"10.1109/ICISCE.2016.118","DOIUrl":"https://doi.org/10.1109/ICISCE.2016.118","url":null,"abstract":"Objective: Target classification is too complex to be used directly to distinguish natural objects from man-made objects in aerial images, so in this paper we instead analyze the classification and features of targets at sea. Based on the features of the contour map, a method for eliminating natural objects is proposed. Methods: First, we decompose the contour map through a perspective transformation to establish an estimate of the true image. Second, the Fourier-Mellin transform is adopted to match the outline of the estimated object image with the outline of the true object image. We then use the Hausdorff distance to correct this preliminary matching, and finally adopt a difference method to remove the natural objects. Results: The experimental results show that the method can effectively remove natural objects while keeping man-made objects in the observed image. Conclusion: In this study, we propose a new method for removing natural objects based on the known contour information of the perspective transformation. The experimental results show that the method is feasible, reducing the difficulty of target detection and improving the efficiency of target classification.","PeriodicalId":6882,"journal":{"name":"2016 3rd International Conference on Information Science and Control Engineering (ICISCE)","volume":"85 5 1","pages":"519-526"},"PeriodicalIF":0.0,"publicationDate":"2016-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89344695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
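The correction step above rests on the Hausdorff distance between two contour point sets, which is compact to state. This is the textbook symmetric definition, not the paper's "Local-Hausdorff" variant:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (contour samples):
    the larger of the two directed worst-case nearest-neighbour distances."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # all pairwise distances
    return max(d.min(axis=1).max(),   # directed A -> B
               d.min(axis=0).max())   # directed B -> A
```

A small outlier in either set dominates the result, which is why a matching pipeline typically uses it to reject or refine a preliminary alignment rather than to produce one.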
Fang Gao, Zhangqin Huang, Shulong Wang, Xinrong Ji
Because of their complex architecture and iterative algorithms, neural networks often make it hard for traditional embedded devices to meet real-time processing requirements in large-scale data applications. Manycore processors are directly applicable to parallel implementation of neural networks. In this paper we present a multilayer perceptron feedforward acceleration framework based on a power-efficient manycore processor, including the network mapping strategy, data structure design, and inter-core communication method. The framework is implemented on a combined Zynq and Epiphany hardware platform with OpenCL. Experimental results on a concrete character recognition example show that the framework with Epiphany achieves about a four-fold feedforward speedup over the dual-core ARM in the Zynq at the same prediction accuracy level.
{"title":"A Manycore Processor Based Multilayer Perceptron Feedforward Acceleration Framework for Embedded System","authors":"Fang Gao, Zhangqin Huang, Shulong Wang, Xinrong Ji","doi":"10.1109/ICISCE.2016.21","DOIUrl":"https://doi.org/10.1109/ICISCE.2016.21","url":null,"abstract":"Because of their complex architecture and iterative algorithms, neural networks often make it hard for traditional embedded devices to meet real-time processing requirements in large-scale data applications. Manycore processors are directly applicable to parallel implementation of neural networks. In this paper we present a multilayer perceptron feedforward acceleration framework based on a power-efficient manycore processor, including the network mapping strategy, data structure design, and inter-core communication method. The framework is implemented on a combined Zynq and Epiphany hardware platform with OpenCL. Experimental results on a concrete character recognition example show that the framework with Epiphany achieves about a four-fold feedforward speedup over the dual-core ARM in the Zynq at the same prediction accuracy level.","PeriodicalId":6882,"journal":{"name":"2016 3rd International Conference on Information Science and Control Engineering (ICISCE)","volume":"7 1","pages":"49-53"},"PeriodicalIF":0.0,"publicationDate":"2016-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89543425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
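The computation being distributed across cores in such a framework is the plain MLP feedforward pass. A minimal NumPy reference version follows; the sigmoid activation is an assumption (the paper does not specify one here), and the Epiphany core mapping is not reproduced:

```python
import numpy as np

def mlp_forward(x, layers):
    """MLP feedforward pass: each layer is a (W, b) pair followed by a sigmoid.
    In a manycore mapping, each core would hold a slice of W's rows and exchange
    partial activations; here the whole pass runs on one core for reference."""
    for W, b in layers:
        x = 1.0 / (1.0 + np.exp(-(W @ x + b)))  # affine transform + sigmoid
    return x
```

A framework like the one described would split each `W @ x` across cores and reconcile the partial results through inter-core communication before applying the activation.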
To realize the synchronous operation of stage elevators, a multi-motor synchronous control scheme based on the combination of non-smooth control and a state observer is proposed. Owing to differences among the parameters of the motor systems, as well as factors such as parameter perturbation and external interference, the stage elevators may fall out of synchronization. To address this, a non-smooth control method is proposed with which multi-motor synchronization can be achieved in finite time. In addition, since the acceleration is difficult to measure directly, a state observer is developed to estimate the acceleration of each motor. Finally, simulation results of the non-smooth control strategy and the traditional PI control strategy are comparatively analyzed, and the results show that the non-smooth control strategy is more robust.
{"title":"Design of Non-smooth Synchronous Control Method for Stage Lifting Machinery","authors":"Jianhui Wang, Qing Wang, Li Zhang","doi":"10.1109/ICISCE.2016.205","DOIUrl":"https://doi.org/10.1109/ICISCE.2016.205","url":null,"abstract":"To realize the synchronous operation of stage elevators, a multi-motor synchronous control scheme based on the combination of non-smooth control and a state observer is proposed. Owing to differences among the parameters of the motor systems, as well as factors such as parameter perturbation and external interference, the stage elevators may fall out of synchronization. To address this, a non-smooth control method is proposed with which multi-motor synchronization can be achieved in finite time. In addition, since the acceleration is difficult to measure directly, a state observer is developed to estimate the acceleration of each motor. Finally, simulation results of the non-smooth control strategy and the traditional PI control strategy are comparatively analyzed, and the results show that the non-smooth control strategy is more robust.","PeriodicalId":6882,"journal":{"name":"2016 3rd International Conference on Information Science and Control Engineering (ICISCE)","volume":"104 1","pages":"943-947"},"PeriodicalIF":0.0,"publicationDate":"2016-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75979467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
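The observer idea in the abstract (reconstructing a state that is hard to measure from one that is easy to measure) can be illustrated with a generic Luenberger observer. The Euler discretization, the double-integrator plant, and the gain values in the test are illustrative assumptions, not the paper's motor model:

```python
import numpy as np

def luenberger_step(x_hat, y, u, A, B, C, L, dt):
    """One Euler step of a Luenberger observer:
    x_hat' = A x_hat + B u + L (y - C x_hat).
    The innovation term L (y - C x_hat) drives the estimate toward the
    measured output, so unmeasured states (e.g. velocity) are reconstructed."""
    return x_hat + dt * (A @ x_hat + B * u + L * (y - C @ x_hat))
```

With observer gain L chosen so that A - L C is stable, the estimation error decays regardless of the input, which is why the estimated acceleration can be fed back into the synchronizing controller.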
Huang Zhanpeng, Zhang Qi, Jiang Shizhong, C. Guohua
Accurate medical image segmentation is the basis of 3D visualization and diagnosis. A medical CT image segmentation algorithm based on watershed segmentation and region merging is proposed to extract the liver area. The similarity criterion for region merging is calculated by automatically analyzing the grayscale distribution in the neighborhoods of seed points selected by the user. A Gaussian filter is first used to smooth the CT image, and the gradient of the smoothed image is then computed with a multi-scale morphological gradient; this gradient image is the input to watershed segmentation. The watershed result is a labeled image, and the labeled regions are merged according to the similarity criterion. Finally, the liver region is extracted by selecting the largest region, and the holes in the liver area, which correspond to its vessels, are filled. Experimental results show that the algorithm can accurately extract the liver region with little user involvement.
{"title":"Medical Image Segmentation Based on the Watersheds and Regions Merging","authors":"Huang Zhanpeng, Zhang Qi, Jiang Shizhong, C. Guohua","doi":"10.1109/ICISCE.2016.218","DOIUrl":"https://doi.org/10.1109/ICISCE.2016.218","url":null,"abstract":"Accurate medical image segmentation is the basis of 3D visualization and diagnosis. A medical CT image segmentation algorithm based on watershed segmentation and region merging is proposed to extract the liver area. The similarity criterion for region merging is calculated by automatically analyzing the grayscale distribution in the neighborhoods of seed points selected by the user. A Gaussian filter is first used to smooth the CT image, and the gradient of the smoothed image is then computed with a multi-scale morphological gradient; this gradient image is the input to watershed segmentation. The watershed result is a labeled image, and the labeled regions are merged according to the similarity criterion. Finally, the liver region is extracted by selecting the largest region, and the holes in the liver area, which correspond to its vessels, are filled. Experimental results show that the algorithm can accurately extract the liver region with little user involvement.","PeriodicalId":6882,"journal":{"name":"2016 3rd International Conference on Information Science and Control Engineering (ICISCE)","volume":"235 1","pages":"1011-1014"},"PeriodicalIF":0.0,"publicationDate":"2016-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85099742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
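The region-merging stage this abstract describes can be sketched on a labeled image. The mean-grey similarity test and the fixed tolerance below are stand-ins for the paper's seed-derived criterion, and a production version would merge transitively with a union-find structure:

```python
import numpy as np

def merge_similar_regions(labels, gray, tol):
    """Merge adjacent labelled regions whose mean grey levels differ by less
    than tol. labels: int array of watershed region ids; gray: same-shape
    intensity image."""
    means = {int(l): gray[labels == l].mean() for l in np.unique(labels)}
    pairs = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]),   # horizontal neighbours
                 (labels[:-1, :], labels[1:, :])):  # vertical neighbours
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                pairs.add((min(int(u), int(v)), max(int(u), int(v))))
    out = labels.copy()
    for u, v in sorted(pairs):
        if abs(means[u] - means[v]) < tol:
            out[out == v] = u                       # absorb the similar neighbour
    return out
```

After merging, selecting the largest remaining label and filling its interior holes yields the organ mask, matching the final steps of the abstract.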
In [10] and [16], we proposed tools for simultaneous variable selection and parameter estimation in a functional linear model with a functional outcome and a large number of scalar predictors. We call these techniques the Function-on-Scalar Lasso (FSL) and the Adaptive Function-on-Scalar Lasso (AFSL). A scalar group lasso was originally used to fit the FSL and AFSL estimates. While this approach works well, we improve on it by developing custom ADMM methods specifically designed for functional data. We propose this new framework as a computational tool for finding FSL estimates. Through numerical studies, we demonstrate the computational improvement of our methodology.
{"title":"High-Dimensional Function-on-Scale Regression via the Alternating Direction Method of Multipliers","authors":"Zhaohu Fan, M. Reimherr","doi":"10.1109/ICISCE.2016.93","DOIUrl":"https://doi.org/10.1109/ICISCE.2016.93","url":null,"abstract":"In [10] and [16], we proposed tools for simultaneous variable selection and parameter estimation in a functional linear model with a functional outcome and a large number of scalar predictors. We call these techniques the Function-on-Scalar Lasso (FSL) and the Adaptive Function-on-Scalar Lasso (AFSL). A scalar group lasso was originally used to fit the FSL and AFSL estimates. While this approach works well, we improve on it by developing custom ADMM methods specifically designed for functional data. We propose this new framework as a computational tool for finding FSL estimates. Through numerical studies, we demonstrate the computational improvement of our methodology.","PeriodicalId":6882,"journal":{"name":"2016 3rd International Conference on Information Science and Control Engineering (ICISCE)","volume":"29 1","pages":"397-399"},"PeriodicalIF":0.0,"publicationDate":"2016-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88285466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
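The update structure that custom ADMM solvers of this kind build on is the standard three-step ADMM loop; here it is for the plain scalar lasso. The paper's FSL solver extends this to grouped functional coefficients, which this sketch does not implement:

```python
import numpy as np

def soft(v, k):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1:
    x-update is a ridge solve, z-update is soft-thresholding,
    u accumulates the running (scaled) dual residual."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    P = np.linalg.inv(A.T @ A + rho * np.eye(n))  # factor once, reuse each iter
    Atb = A.T @ b
    for _ in range(iters):
        x = P @ (Atb + rho * (z - u))
        z = soft(x + u, lam / rho)
        u = u + x - z
    return z
```

For a group or functional penalty, only the z-update changes: soft-thresholding is replaced by the proximal operator of the corresponding norm, which is what makes ADMM attractive for tailoring to functional data.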
With the improvement of processor and SDRAM performance, the SDRAM controller has become a bottleneck for system performance. In this paper, a two-level buffered SDRAM controller is proposed, and its design and verification are described. The controller improves the throughput of the processor to SDRAM memory and provides a solution for the design of high-performance systems.
{"title":"A Tow-Level Buffered SDRAM Controller","authors":"T. Jin, Wenxin Li, Xiangyu Hu","doi":"10.1109/ICISCE.2016.37","DOIUrl":"https://doi.org/10.1109/ICISCE.2016.37","url":null,"abstract":"With the improvement of processor and SDRAM performance, the SDRAM controller has become a bottleneck for system performance. In this paper, a two-level buffered SDRAM controller is proposed, and its design and verification are described. The controller improves the throughput of the processor to SDRAM memory and provides a solution for the design of high-performance systems.","PeriodicalId":6882,"journal":{"name":"2016 3rd International Conference on Information Science and Control Engineering (ICISCE)","volume":"1 1","pages":"126-128"},"PeriodicalIF":0.0,"publicationDate":"2016-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91315752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Liu Yinghui, Wang Qingning, Zhang Donghui, Sun Xiangde, Shen Yang, Xu Xianglian
Nowadays, backward power automation management in our country causes the loss of a great deal of energy. To improve this situation, an anti-stealing mathematical model is introduced in this paper. First, ten factors are selected to build the indicator evaluation system, and data mining is used to process large volumes of electricity data. Then, a mathematical model based on a BP neural network is built to analyze customer consumption behavior. With the model, a suspicion coefficient of electricity stealing can be calculated, and the credit rating of each power consumer is classified. Some typical companies are selected to verify the anti-stealing model, and the results show that it offers a feasible approach to the electricity-stealing problem.
{"title":"Research and Application of Electricity Anti-stealing System Based on Neural Network","authors":"Liu Yinghui, Wang Qingning, Zhang Donghui, Sun Xiangde, Shen Yang, Xu Xianglian","doi":"10.1109/ICISCE.2016.224","DOIUrl":"https://doi.org/10.1109/ICISCE.2016.224","url":null,"abstract":"Nowadays, backward power automation management in our country causes the loss of a great deal of energy. To improve this situation, an anti-stealing mathematical model is introduced in this paper. First, ten factors are selected to build the indicator evaluation system, and data mining is used to process large volumes of electricity data. Then, a mathematical model based on a BP neural network is built to analyze customer consumption behavior. With the model, a suspicion coefficient of electricity stealing can be calculated, and the credit rating of each power consumer is classified. Some typical companies are selected to verify the anti-stealing model, and the results show that it offers a feasible approach to the electricity-stealing problem.","PeriodicalId":6882,"journal":{"name":"2016 3rd International Conference on Information Science and Control Engineering (ICISCE)","volume":"118 1","pages":"1039-1043"},"PeriodicalIF":0.0,"publicationDate":"2016-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89724869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
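The BP-network scoring idea can be sketched as a one-hidden-layer network trained by backpropagation whose sigmoid output is read as a suspicion coefficient in [0, 1]. The layer sizes, learning rate, and squared-error loss are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def bp_train_step(W1, b1, W2, b2, x, t, lr=0.5):
    """One backpropagation step on a single sample.
    x: indicator-factor vector; t: target label (1 = known stealing case).
    Returns the current output, interpretable as a suspicion coefficient."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sig(W1 @ x + b1)                  # hidden activations
    y = sig(W2 @ h + b2)                  # suspicion coefficient in (0, 1)
    # gradients of 0.5*(y - t)^2 through the two sigmoid layers
    d2 = (y - t) * y * (1 - y)
    d1 = (W2.T @ d2) * h * (1 - h)
    W2 -= lr * np.outer(d2, h); b2 -= lr * d2
    W1 -= lr * np.outer(d1, x); b1 -= lr * d1
    return y
```

In the paper's setting the input would be the ten evaluation factors and the output thresholded into credit-rating classes; here a tiny two-factor example suffices to show the coefficient being driven toward its target.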