Pub Date: 2012-06-22 | DOI: 10.1109/ICSESS.2012.6269531
Multi-media data mining technology for the systematic framework
Wanliang Fu
The paper analyzes the requirements placed on multimedia technology in today's information environment. It introduces a common systematic framework, the general organization of a multimedia database, and the mining process applied to it. It then describes the traits of the different kinds of media that can be mined, provides a basic method for multimedia data mining, and discusses the future challenges facing multimedia data-mining technology.
{"title":"Multi-media data mining technology for the systematic framework","authors":"Wanliang Fu","doi":"10.1109/ICSESS.2012.6269531","DOIUrl":"https://doi.org/10.1109/ICSESS.2012.6269531","url":null,"abstract":"The paper makes an analysis for the requirement of the multimedia technology in today's information environment. A common process for systematic framework, general organization and the mining process for the multimedia database are introduced. Besides, a description of the traits for the different kinds of the media that can be mined is made and a basic method of the multi-media data mining is provided. A discussion has been made on the future challenges of the multi-media data-mining technology.","PeriodicalId":205738,"journal":{"name":"2012 IEEE International Conference on Computer Science and Automation Engineering","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130797764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-22 | DOI: 10.1109/ICSESS.2012.6269579
Testing transitive trust algorithms for use in information quality assessment
M. Šajna
Information quality (IQ) on the World Wide Web is stagnant, so there is a constant need for IQ assessment techniques. Our research works toward a model that employs community-driven personalization with the help of trust metrics. This paper presents results from a series of experiments that demonstrate the feasibility and suitability of two transitive trust algorithms for this purpose, and benchmarks their effectiveness in a test environment.
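The two algorithms under test are not named in the abstract. As a rough sketch of what a transitive trust metric computes, the example below propagates trust through a weighted trust graph by multiplying edge values along a path and keeping the strongest path found; the community graph and the hop cap are invented for illustration, and this is the general idea rather than the evaluated algorithms.

```python
# Illustrative sketch only: propagate trust transitively through a weighted
# trust graph by multiplying edge trust values along a path and keeping the
# strongest path. The graph, values and hop cap are made up for the example.

def transitive_trust(graph, source, target, max_hops=3):
    """Return the highest product of edge trusts over paths source -> target."""
    best = 0.0
    stack = [(source, 1.0, 0, {source})]
    while stack:
        node, trust, hops, seen = stack.pop()
        if node == target:
            best = max(best, trust)
            continue
        if hops == max_hops:
            continue
        for neighbour, edge_trust in graph.get(node, {}).items():
            if neighbour not in seen:
                stack.append((neighbour, trust * edge_trust, hops + 1, seen | {neighbour}))
    return best

# Hypothetical community: Alice trusts Bob 0.9, Bob trusts Carol 0.8, ...
graph = {"alice": {"bob": 0.9, "dave": 0.4},
         "bob": {"carol": 0.8},
         "dave": {"carol": 0.9}}
print(transitive_trust(graph, "alice", "carol"))  # ~0.72 via alice -> bob -> carol
```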
{"title":"Testing transitive trust algorithms for use in information quality assessment","authors":"M. Šajna","doi":"10.1109/ICSESS.2012.6269579","DOIUrl":"https://doi.org/10.1109/ICSESS.2012.6269579","url":null,"abstract":"Information quality (IQ) in the World Wide Web is stagnant; thus there is a constant need for IQ assessment techniques. Our research works towards a model, which would employ community-driven personalization with the help of trust metrics. This paper presents results from a series of experiments, which have proven the feasibility and suitability of two transitive trust algorithms for our research. Furthermore, we have benchmarked their effectiveness in the test environment.","PeriodicalId":205738,"journal":{"name":"2012 IEEE International Conference on Computer Science and Automation Engineering","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126703021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-22 | DOI: 10.1109/ICSESS.2012.6269428
An efficient algorithm for mining closed weighted frequent pattern over data streams
Wang Jie, Zeng Yu
Weighted frequent pattern mining discovers the more important frequent patterns by assigning a different weight to each item, while closed frequent pattern mining reduces the number of reported patterns without losing result information. In this paper, we propose DS_CWFP, an efficient algorithm for mining closed weighted frequent patterns over data streams. The algorithm is based on a sliding window and discovers closed weighted frequent patterns from the most recent data. A new DS_CWFP data structure dynamically maintains the transaction information together with the closed weighted frequent patterns found in the current sliding window. Three optimization strategies are presented, and the algorithm is described in detail. Experimental studies confirm the effectiveness of DS_CWFP.
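DS_CWFP itself is not described here. As a minimal sketch of the setting it operates in, the example below keeps only the most recent transactions in a sliding window and computes a weighted support as the support count scaled by the mean item weight (one common definition). The window size, item weights, and support definition are illustrative assumptions, not the paper's exact measure or closure check.

```python
# Minimal sketch of the sliding-window, weighted-support setting (not the
# DS_CWFP algorithm itself). Item weights, window size and the weighted-support
# definition (support count x mean item weight) are illustrative assumptions.
from collections import deque

class SlidingWindow:
    def __init__(self, size):
        self.window = deque(maxlen=size)   # oldest transaction drops out automatically

    def add(self, transaction):
        self.window.append(frozenset(transaction))

    def weighted_support(self, itemset, weights):
        itemset = frozenset(itemset)
        count = sum(1 for t in self.window if itemset <= t)
        mean_w = sum(weights[i] for i in itemset) / len(itemset)
        return count * mean_w

weights = {"a": 0.9, "b": 0.6, "c": 0.3}
win = SlidingWindow(size=3)
for t in [{"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a", "b"}]:
    win.add(t)                              # the first transaction has expired by the end
print(win.weighted_support({"a", "b"}, weights))   # 2 * 0.75 = 1.5
```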
{"title":"An efficient algorithm for mining closed weighted frequent pattern over data streams","authors":"Wang Jie, Zeng Yu","doi":"10.1109/ICSESS.2012.6269428","DOIUrl":"https://doi.org/10.1109/ICSESS.2012.6269428","url":null,"abstract":"Weighted frequent pattern mining is suggested to discover more important frequent pattern by considering different weights of each item, and closed frequent pattern mining can reduces the number of frequent patterns and keep sufficient result information. In this paper, we propose an efficient algorithm DS_CWFP to mine closed weighted frequent pattern mining over data streams. We present an efficient algorithm based on sliding window and can discover closed weighted frequent pattern from the recent data. A new efficient DS_CWFP data structure is used to dynamically maintain the information of transactions and also maintain the closed weighted frequent patterns has been found in the current sliding window. Three optimization strategies are present. The detail of the algorithm DS_CWFP is also discussed. Experimental studies are performed to evaluate the good effectiveness of DS_CWFP.","PeriodicalId":205738,"journal":{"name":"2012 IEEE International Conference on Computer Science and Automation Engineering","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126913519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-22 | DOI: 10.1109/ICSESS.2012.6269430
Wireless sensor networks routing protocol based on ant colony algorithm
Guo Xiangping, Jin Ying, Hu Ronglin, Chen Liqing
This paper presents ANRP, a novel routing algorithm based on the Ant Colony System, aimed at the limited-energy problem of the routing process in wireless sensor networks; it achieves better load balance and prolongs the network lifetime. ANRP combines the ratio of residual energy to total energy with the distance between two nodes, and uses this combination as the control factor of the decision probability. In the pheromone-update equation it also accounts for the data packet length and the hop count between the source and destination nodes. In this way, ANRP can search for the optimal path from source to destination and balance the energy consumption of the WSN. Simulation results demonstrate that ANRP outperforms AODV.
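The exact decision-probability formula is not reproduced in the abstract. The sketch below uses a standard Ant Colony System style rule, pheromone^alpha times heuristic^beta, with a heuristic built from the residual-to-total energy ratio and the inverse distance, as an assumed reading of the description rather than the published equation.

```python
# Sketch of an ACS-style next-hop choice in the spirit of ANRP (assumed form,
# not the paper's exact equation): the heuristic term mixes each candidate's
# residual-energy ratio with the inverse of its distance to the current node.
import random

def choose_next_hop(candidates, alpha=1.0, beta=2.0):
    """candidates: list of dicts with 'pheromone', 'residual', 'total', 'distance'."""
    scores = []
    for c in candidates:
        energy_ratio = c["residual"] / c["total"]
        heuristic = energy_ratio / c["distance"]          # favour charged, nearby nodes
        scores.append((c["pheromone"] ** alpha) * (heuristic ** beta))
    total = sum(scores)
    probabilities = [s / total for s in scores]
    return random.choices(range(len(candidates)), weights=probabilities, k=1)[0]

neighbours = [
    {"pheromone": 0.8, "residual": 40.0, "total": 100.0, "distance": 5.0},
    {"pheromone": 0.5, "residual": 90.0, "total": 100.0, "distance": 7.0},
]
print("chosen next hop:", choose_next_hop(neighbours))
```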
{"title":"Wireless sensor networks routing protocol based on ant colony algorithm","authors":"Guo Xiangping, Jin Ying, Hu Ronglin, Chen Liqing","doi":"10.1109/ICSESS.2012.6269430","DOIUrl":"https://doi.org/10.1109/ICSESS.2012.6269430","url":null,"abstract":"In the paper, we provide a novel routing algorithm (ANRP), which is based on Ant Colony System. The aim of the ANRP is to solve the problem of limited energy for wireless sensor network routing process. It is able to achieve better load balance and prolong the network lifetime. ANRP combines the ratio of the residual energy and the total energy with the distance of any two nodes, which is used as the control factor of the decision probability. Furthermore, in the incremental equation of pheromone, it also considers data packets length and the hop-size between source node and destination node. In this way, NARP can search the optimize path from the source node to destination node and balances the energy consumption for WSNs. Simulation results demonstrate that NARP has better performance than AODV.","PeriodicalId":205738,"journal":{"name":"2012 IEEE International Conference on Computer Science and Automation Engineering","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123347357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-22 | DOI: 10.1109/ICSESS.2012.6269518
Improved algorithm for UPnP discovery in smart space
Zhi Wang, Jizhong Zhao
The current UPnP service discovery algorithm suffers a severe drop in response message rate when applied to pervasive computing environments with a large number of services. The deficiency lies in the near-simultaneous transmission of all response messages and the independent random selection of each transmission delay, which causes responses to collide and congest the network. To reduce this congestion, a refined algorithm is developed that leaves the root device's discovery protocol unchanged and has the control point issue searches with multiple, adaptively chosen maximum query times (MX), ensuring that the devices in the environment can still be discovered. Simulation verifies that the refined algorithm yields a lower response message drop ratio.
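The adaptation policy itself is described only at a high level. The snippet below shows what issuing a standard SSDP M-SEARCH with a chosen MX value looks like, so that the parameter the refined algorithm adapts is concrete; the MX schedule and timeout used here are illustrative, not the paper's policy.

```python
# Sketch: a UPnP control point issuing SSDP M-SEARCH queries with different MX
# values (the knob the refined algorithm adapts). The MX schedule below is an
# illustrative guess, not the adaptation policy from the paper.
import socket

SSDP_ADDR = ("239.255.255.250", 1900)

def msearch(mx, timeout=None):
    request = (
        "M-SEARCH * HTTP/1.1\r\n"
        "HOST: 239.255.255.250:1900\r\n"
        'MAN: "ssdp:discover"\r\n'
        f"MX: {mx}\r\n"
        "ST: ssdp:all\r\n\r\n"
    )
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout if timeout is not None else mx + 1)
    sock.sendto(request.encode("ascii"), SSDP_ADDR)
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            responses.append((addr, data.decode(errors="replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return responses

# Widen MX on successive rounds so responders spread their replies out more.
for mx in (1, 2, 5):
    print(f"MX={mx}: {len(msearch(mx))} responses")
```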
{"title":"Improved algorithm for UPnP discovery in smart space","authors":"Zhi Wang, Jizhong Zhao","doi":"10.1109/ICSESS.2012.6269518","DOIUrl":"https://doi.org/10.1109/ICSESS.2012.6269518","url":null,"abstract":"The current UPnP service discovery algorithm can cause severe drops with response message rate when applied in pervasive computing environments with large scale of services. The deficiency lies in the instantaneous transmission of all response messages and the independent random selection of the transmission delay so that the response messages collide and congest. To reduce the congestion, a refined algorithm is developed which is based on unchanged the root device's discovery protocol, control point using multiple adaptive maximum query time (MX), to ensure that devices in the environment can be discovered. The simulation verifies that the refined algorithm gives better performance in the response message's drop ratio.","PeriodicalId":205738,"journal":{"name":"2012 IEEE International Conference on Computer Science and Automation Engineering","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122335759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-22 | DOI: 10.1109/ICSESS.2012.6269463
Embedded lowlight image quality measure
Yongjuan Pan, Jin Wang
Based on research into what embedded camera users attend to, a novel blind image quality ranking and poor-appearance detection algorithm for low-light environments is proposed for the first time. Unlike traditional algorithms, the proposed method is a mixed model that detects low light, grades the degree of blur and the noise level, and identifies under- or over-exposure. It achieves real-time performance even on low-end embedded devices while providing detailed quality-score feedback. Experiments show that the proposed method performs well and is consistent with human perception.
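The abstract lists the components of the mixed model without their formulas. The sketch below uses common stand-ins for each component (mean luminance for low light, variance of a Laplacian for blur, median pixel-difference for noise, histogram tails for exposure) purely to illustrate the kind of measurements such a model combines; the thresholds and estimators are assumptions, not the authors' actual ones.

```python
# Illustrative stand-ins for the components the mixed model combines; the
# thresholds and the blur/noise estimators are common choices, not the paper's.
import numpy as np

def quality_report(gray):
    """gray: 2-D uint8 array (grayscale frame)."""
    g = gray.astype(np.float64)
    mean_luma = g.mean()

    # Blur: variance of a 3x3 Laplacian response; low variance suggests blur.
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    blur_score = lap.var()

    # Noise: median absolute difference between horizontally adjacent pixels.
    noise_score = np.median(np.abs(np.diff(g, axis=1)))

    # Exposure: fraction of pixels crushed to black or clipped to white.
    under = (gray < 10).mean()
    over = (gray > 245).mean()

    return {
        "lowlight": mean_luma < 50,            # assumed threshold
        "blur_score": blur_score,
        "noise_score": noise_score,
        "underexposed_ratio": under,
        "overexposed_ratio": over,
    }

frame = np.random.randint(0, 60, size=(120, 160), dtype=np.uint8)  # fake dark frame
print(quality_report(frame))
```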
{"title":"Embedded lowlight image quality measure","authors":"Yongjuan Pan, Jin Wang","doi":"10.1109/ICSESS.2012.6269463","DOIUrl":"https://doi.org/10.1109/ICSESS.2012.6269463","url":null,"abstract":"Based on the research on embedded camera user's attention level, a novel blind image quality ranking and poor appearance detection algorithm for low lighting environment are proposed for the first time. Different from the traditional algorithms, the proposed one mixed model composed of detecting lowlight, grading blur degree and noise level, and figuring out the under- or over-exposure. It reaches the real time performance even on low level embedded devices with the detail feedback of quality score. The experiments show the proposed method performs well and is consistent with human perception.","PeriodicalId":205738,"journal":{"name":"2012 IEEE International Conference on Computer Science and Automation Engineering","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126602895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-22 | DOI: 10.1109/ICSESS.2012.6269474
Crime detection using Latent Semantic Analysis and hierarchical structure
Canyu Wang, Xuebi Guo, Hao Han
We aim to help investigators discover hidden conspirators. In criminal cases, investigators and police must make full use of the messages and spoken documents recorded in their case files, so mining the latent information in these messages is vital. In Information Retrieval, Latent Semantic Analysis (LSA) is an important query-matching method that can uncover the underlying semantic relations, or similarities, between words and topics. We introduce a hierarchical network structure to analyze the original message network, which makes the analysis convenient while preserving the connectivity among all the conspirators. We use LSA to measure the similarity between each topic and a Crime Prototype Vector; these similarities serve as path weights in the network hierarchy and are used to calculate degrees of suspicion.
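As a concrete picture of the LSA step, the sketch below builds a small term-document matrix, truncates its SVD, and scores each message against a hypothetical crime-prototype query by cosine similarity in the latent space. The vocabulary, messages, prototype terms, and rank are invented for illustration and are not taken from the paper.

```python
# Sketch of the LSA scoring step: project messages and a hypothetical crime
# prototype query into a truncated SVD space and compare them by cosine
# similarity. Vocabulary, messages and rank k are illustrative only.
import numpy as np

vocab = ["meet", "package", "dock", "payment", "lunch", "movie"]
messages = [
    "meet at the dock with the package",
    "payment after the package arrives",
    "lunch and a movie tonight",
]
prototype = "package payment dock"     # assumed crime-prototype terms

def term_doc_matrix(docs, vocab):
    return np.array([[doc.split().count(t) for doc in docs] for t in vocab], float)

A = term_doc_matrix(messages, vocab)               # terms x documents
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                              # latent dimensions kept
Uk, sk = U[:, :k], s[:k]

def project(text):
    q = term_doc_matrix([text], vocab)[:, 0]
    return (q @ Uk) / sk                           # fold the query into LSA space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

proto_vec = project(prototype)
for msg, doc_vec in zip(messages, Vt[:k].T):       # rows of V_k are document coords
    print(f"{cosine(proto_vec, doc_vec):+.2f}  {msg}")
```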
{"title":"Crime detection using Latent Semantic Analysis and hierarchical structure","authors":"Canyu Wang, Xuebi Guo, Hao Han","doi":"10.1109/ICSESS.2012.6269474","DOIUrl":"https://doi.org/10.1109/ICSESS.2012.6269474","url":null,"abstract":"We make efforts to help the investigator discover the hidden conspirators. In the criminal cases, the investigators or the police have to make full use of the messages or spoken documents data that they record in files. Thus, mining the latent information from messages is vital to them. In Information Retrieval area, Latent Semantic Analysis (LSA) is an important method for query matching which can discover the underlying semantic relation or similarity between words and topics. We introduce a network hierarchical structure to analyze the original message network, making the analysis conveniently as well as ensuring the connectivity of the inner network connection of all the conspirators. For this purpose, we use LSA to measure the similarities between topics and Crime Prototype Vector, and the similarities will be used as the weights of the paths in the network hierarchies and calculate the suspicious degrees.","PeriodicalId":205738,"journal":{"name":"2012 IEEE International Conference on Computer Science and Automation Engineering","volume":"70 17","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120888723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-22 | DOI: 10.1109/ICSESS.2012.6269486
Research of data mashup based on Google Map and top-k ranking
Wenjiang Hu, Fuxin Zhao, Yongbing Gao, Tingting Zhao
As an important feature of Web 2.0, mashup technology can increase the value of data by integrating data and services. Google Map offers a series of services through which map data is supplied, processed, and published, so building on Google Map technology follows the development trend of the times. In this paper, Google Map is used as a platform and combined with a reasonable top-k ranking method to build an application mode that can process map data, present results to the user in an order matched to their needs, and provide reasonable suggestions for choosing among them.
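The abstract stays at the architecture level; the sketch below illustrates the top-k step it relies on, scoring candidate map points by a weighted mix of distance and rating and keeping the k best with heapq.nlargest. The scoring function, weights, and sample points are assumptions for illustration, not the paper's ranking method.

```python
# Sketch of the top-k ranking step behind the mashup (scoring weights and the
# sample points are invented; the paper's ranking function may differ).
import heapq
import math

def score(point, user_lat, user_lng, w_dist=0.6, w_rating=0.4):
    dist = math.hypot(point["lat"] - user_lat, point["lng"] - user_lng)
    return -w_dist * dist + w_rating * point["rating"]   # nearer and better rated wins

points = [
    {"name": "Cafe A", "lat": 40.01, "lng": 116.32, "rating": 4.5},
    {"name": "Cafe B", "lat": 40.05, "lng": 116.40, "rating": 4.9},
    {"name": "Cafe C", "lat": 40.02, "lng": 116.33, "rating": 3.8},
]
top2 = heapq.nlargest(2, points, key=lambda p: score(p, 40.00, 116.32))
print([p["name"] for p in top2])
```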
{"title":"Research of data mashup based on Google Map and top-k ranking","authors":"Wenjiang Hu, Fuxin Zhao, Yongbing Gao, Tingting Zhao","doi":"10.1109/ICSESS.2012.6269486","DOIUrl":"https://doi.org/10.1109/ICSESS.2012.6269486","url":null,"abstract":"As an important feature of Web 2.0, mashup technology can improve data value by integrating data and services. Google Map provides a series of services of map data provided , processed ,published, making application of Google Map technology conform the development trend of the times. In this paper, Google Map is as a platform combined with a reasonable top-k ranking method which are able to build a application mode that can process map data, provide user a reasonable order according to the need and provide reasonable choose suggestion.","PeriodicalId":205738,"journal":{"name":"2012 IEEE International Conference on Computer Science and Automation Engineering","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116077239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-22 | DOI: 10.1109/ICSESS.2012.6269478
Distribute localization for wireless sensor networks using particle swarm optimization
Jialiang Lv, Huanqing Cui, Ming Yang
Localization is one of the key technologies of wireless sensor networks, and the localization problem is commonly formulated as an optimization problem. Particle swarm optimization (PSO) is easy to implement and requires only moderate computing resources, which makes it feasible for sensor network localization. To improve the efficiency and precision of PSO-based localization methods, this paper proposes a distributed PSO-based method. Based on the probabilistic distribution of the ranging error, it presents a new objective function for evaluating particle fitness. Moreover, it tries to localize as many unknown nodes as possible within a more accurate search space. Simulation results show that the proposed method outperforms previously proposed algorithms.
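The new objective function derived from the ranging-error distribution is not reproduced in the abstract. The sketch below shows the usual range-based baseline that such a function refines, a squared-error fitness over anchor distances, driven by a bare-bones global-best PSO; both the fitness and the PSO settings are illustrative, not the authors' distributed algorithm.

```python
# Baseline range-based fitness plus a bare-bones PSO loop (illustrative only;
# the paper replaces this squared-error fitness with one derived from the
# ranging-error distribution and runs the search in a distributed fashion).
import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
measured = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.2, 3)

def fitness(pos):
    est = np.linalg.norm(anchors - pos, axis=1)
    return np.sum((est - measured) ** 2)

# Plain global-best PSO over candidate positions in a 10 x 10 field.
n, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
x = rng.uniform(0, 10, (n, 2))
v = np.zeros((n, 2))
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 10)
    f = np.array([fitness(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("estimated position:", gbest)   # should land near (3, 4)
```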
{"title":"Distribute localization for wireless sensor networks using particle swarm optimization","authors":"Jialiang Lv, Huanqing Cui, Ming Yang","doi":"10.1109/ICSESS.2012.6269478","DOIUrl":"https://doi.org/10.1109/ICSESS.2012.6269478","url":null,"abstract":"Localization is one the key technologies of wireless sensor networks, and the problem of localization is always formulate as an optimization problem. Particle swarm optimization (PSO) is easy to implement and requires moderate computing resources, which is feasible for localization of sensor network. To improve the efficiency and precision of PSO-based localization methods, this paper proposes a distributed PSO-based method. Based on the probabilistic distribution of ranging error, it presents a new objective function to evaluate the fitness of particles. Moreover, it tries to localize as many unknown nodes as possible in a more accurate search space. Simulation results show that the proposed method outperforms previous proposed algorithms.","PeriodicalId":205738,"journal":{"name":"2012 IEEE International Conference on Computer Science and Automation Engineering","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121708047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-06-22 | DOI: 10.1109/ICSESS.2012.6269503
Research of a scheduling and load balancing scheme based on large-scale distributed systems
Zhenbin Yan, Wenzheng Li
Load balancing is an important technique for improving the performance of distributed systems. Based on an analysis of the distributed inter-process communication protocol, we design a load-balancing solution for an automated deployment and monitoring system. To solve the heavy communication load that arises when the control center sends messages directly to a large number of clients, and the bottleneck that the number of clients controlled by the center cannot exceed the machine's port limit, we design a resource-scheduling strategy and put forward the concept of a communication-layer proxy, which we realize with two methods: a subnet-mask algorithm and a consistent-hashing algorithm. Experimental results show that the scheme effectively solves these two problems, with good load uniformity and little remapping damage.
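Of the two proxy-assignment methods mentioned, consistent hashing is the easier to illustrate: the sketch below places proxies on a hash ring with virtual nodes and maps each client to the next node clockwise, so removing a proxy only remaps the clients it owned, which is the "little remapping damage" property. The hash function and virtual-node count are conventional choices, not details taken from the paper.

```python
# Sketch of a consistent-hash ring for assigning clients to communication-layer
# proxies (virtual-node count and hash choice are conventional, not the paper's).
import bisect
import hashlib

class HashRing:
    def __init__(self, proxies, vnodes=100):
        self.ring = []                      # sorted list of (hash, proxy)
        for proxy in proxies:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{proxy}#{i}"), proxy))
        self.ring.sort()
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def proxy_for(self, client_id):
        if not self.ring:
            raise ValueError("no proxies registered")
        idx = bisect.bisect(self.keys, self._hash(client_id)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["proxy-1", "proxy-2", "proxy-3"])
before = {c: ring.proxy_for(c) for c in (f"client-{i}" for i in range(1000))}
ring2 = HashRing(["proxy-1", "proxy-3"])          # proxy-2 goes down
moved = sum(1 for c, p in before.items()
            if p != "proxy-2" and p != ring2.proxy_for(c))
print("clients remapped away from surviving proxies:", moved)   # should be 0
```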
{"title":"Research of a scheduling and load balancing scheme based on large-scale distributed systems","authors":"Zhenbin Yan, Wenzheng Li","doi":"10.1109/ICSESS.2012.6269503","DOIUrl":"https://doi.org/10.1109/ICSESS.2012.6269503","url":null,"abstract":"Load-balancing is an important technology for improving performance of distributed system. We are based on the analysis of distributed process communication protocol, then we design a load balancing solution for the automation deployment and monitoring system , in order to solve the big communication load problem when the control center directly send messages to the large number of clients and the bottleneck that the number of client controlled by center cannot beyond the upper limit of the machine port, we design a resource scheduling strategy and puts forward the concept of communication layer proxy, then using two methods the subnet mask algorithm and consistency hash algorithm to realize the communication layer proxy. Through the experimental results show that the plan effective solve this two questions and with a good load uniformity and a little damage of remapping.","PeriodicalId":205738,"journal":{"name":"2012 IEEE International Conference on Computer Science and Automation Engineering","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116717021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}