With the rapid development of online social networks (OSNs), spammers have shifted their attention away from the traditional email field. Nowadays, advertisements, deceptive messages, and illegal content are prevalent in all kinds of OSNs. They propagate arbitrarily from one user to another, polluting the Internet environment and, worse, causing many security problems. Previous works have detected spammers according to user properties. The problem is that, to avoid detection, spammers tend to disguise themselves as normal users; moreover, some normal users also engage in spam spreading for financial benefit, which makes detection more difficult. In this paper, we approach the detection problem from the perspective of user influence. The basis of our work is that, since spammers pretend to be normal, their influence should keep pace with their seemingly normal behavior. However, when a spam campaign is launched, a great many spammers engage in propagation in order to influence others, so the original poster's influence rises suddenly, making him stand out from the others. In this way, we can identify the original spammers and intervene at the root of the propagation tree. Our approach is evaluated on real data gathered from Weibo and shows encouraging results.
Kan Chen, Peidong Zhu, and Yueshan Xiong, "Mining Spam Accounts with User Influence," 2013 International Conference on Information Science and Cloud Computing Companion, Dec. 7, 2013. doi: 10.1109/ISCC-C.2013.85
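The influence-spike signal described in the abstract can be sketched as a simple anomaly test on a user's daily repost counts; the function name, window length, and z-score threshold below are illustrative assumptions, not the paper's actual detector.

```python
import statistics

def influence_spikes(series, window=7, threshold=3.0):
    """Return indices where a value jumps far above the recent mean."""
    spikes = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent) or 1.0  # avoid division by zero
        if (series[i] - mean) / stdev > threshold:
            spikes.append(i)
    return spikes

# A spammer looks normal for a week, then a campaign launches on day 7.
daily_reposts = [3, 4, 2, 5, 3, 4, 3, 180, 4, 3]
print(influence_spikes(daily_reposts))  # -> [7]
```

A sudden jump relative to the account's own history is exactly the "outstanding" influence change the paper exploits, so the test is per-user rather than against a global average.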
This paper probes into how to improve information retrieval (IR) by changing the feature distribution of text. It introduces Cloud Model theory into the Latent Dirichlet Allocation (LDA) model and builds a new feature selection system. The LDA model is used to mine the underlying topical structure: each topic is associated with a multinomial distribution over semantically related words, although it is doubtful whether the topics themselves are semantically related to one another. Based on the probability distribution over the vocabulary produced by the LDA model, the new system, combined with Cloud Model theory, can automatically derive the set of features whose contribution degree in the text is high. Results show this feature set has fewer features but higher classification accuracy, clearly outperforming currently popular feature selection methods. When a query matches words with high contribution degree, the more such words there are, the more relevant the retrieved article is to the query. Experiments on single-language IR (SLIR) over the NTCIR-5 (5th NII Test Collection for IR Systems) collections show that this method achieves a marked improvement over several other IR methods.
Maoyuan Zhang, Fanli He, and Shui-Chin Chen, "New Features Acquisition of Text with Cloud-LDA Model," 2013 International Conference on Information Science and Cloud Computing Companion, Dec. 7, 2013. doi: 10.1109/ISCC-C.2013.94
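The selection idea can be illustrated on the per-topic word distributions that LDA produces. In this minimal sketch the "contribution degree" is simplified to a word's probability within a topic (the Cloud Model computation is not reproduced), and the topic dictionaries are invented examples.

```python
def high_contribution_words(topic_word_probs, top_k=2):
    """topic_word_probs: {topic: {word: P(word|topic)}}; keep top_k per topic."""
    selected = set()
    for topic, dist in topic_word_probs.items():
        ranked = sorted(dist, key=dist.get, reverse=True)
        selected.update(ranked[:top_k])  # high-probability, topic-bearing words
    return selected

topics = {
    "sports":  {"match": 0.30, "team": 0.25, "the": 0.05},
    "finance": {"stock": 0.35, "market": 0.28, "the": 0.06},
}
print(sorted(high_contribution_words(topics)))  # -> ['market', 'match', 'stock', 'team']
```

Note how a function word like "the", which has low probability in every topic, is filtered out — a smaller feature set that keeps the discriminative vocabulary, in the spirit of the paper's result.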
Rumors are unavoidable in social networks and spread through them. In this paper, we introduce three roles into social networks: rumor maker, accomplice, and innocent. Rumor makers are people who originate rumors; since a rumor maker cannot reach as many people as possible without help, accomplices are people who help the rumor maker spread rumors, while innocents are people who do not help spread them. We introduce an extended cellular automata algorithm, Hadoop Cellular Automata (HCA), to simulate people's tweeting activity in social networks, and we deploy our experiments on the open-source Hadoop platform. Our results show that HCA is a linear algorithm, simulates tweeting activity well, and can easily pick out rumors in social networks.
Hui Zhang, Ji Li, and Yueliang Xiao, "Hadoop Cellular Automata for Identifying Rumor in Social Networks," 2013 International Conference on Information Science and Cloud Computing Companion, Dec. 7, 2013. doi: 10.1109/ISCC-C.2013.76
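The three-role idea can be illustrated with a toy one-dimensional cellular automaton: a spreading cell (maker or accomplice) passes the rumor to unaware neighbours, while innocents never relay it. The update rule and grid are invented for illustration; the paper's HCA rules and Hadoop parallelisation are not reproduced.

```python
def step(cells):
    """One synchronous CA update: 0 = unaware, 1 = spreading, 2 = innocent."""
    nxt = list(cells)
    for i, state in enumerate(cells):
        if state == 0:  # only unaware cells can change
            left = cells[i - 1] if i > 0 else 0
            right = cells[i + 1] if i < len(cells) - 1 else 0
            if left == 1 or right == 1:
                nxt[i] = 1  # exposed to a spreader: adopts and relays the rumor
    return nxt

grid = [0, 0, 1, 2, 0, 0]   # one rumor maker at index 2, an innocent at index 3
for _ in range(3):
    grid = step(grid)
print(grid)  # -> [1, 1, 1, 2, 0, 0]
```

The run shows the qualitative behaviour the roles induce: the rumor saturates the left side, while the innocent at index 3 blocks propagation to the right.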
First, the concept of cloud computing and the present state of management information systems (MIS) are introduced. This is followed by an analysis of the impact of cloud computing on MIS with respect to development, operation and maintenance, and data security. The conclusion is that the cloud-based MIS development mode is built on mashups and a workflow-oriented development methodology; operation and maintenance become simple and service-oriented, with on-demand billing and minimal resource cost, while data security remains a constraint. This paper can guide enterprises in planning and building MIS in the cloud, improving the efficiency and probability of success of MIS development, so as to enhance their core competitiveness.
Xiaojing Wang and Conglin Ran, "Analysis on the Impact of Cloud Computing for Management Information System," 2013 International Conference on Information Science and Cloud Computing Companion, Dec. 7, 2013. doi: 10.1109/ISCC-C.2013.46
In this paper, a scheme named multi-use conditional proxy re-encryption based on ECC is first proposed, which can be used in the following email scenario: Alice can delegate to Bob her right to decrypt an encrypted mail sent to her that contains a specific keyword and satisfies the condition she has set on the proxy; Bob can likewise delegate the right to others through the proxy once the condition he has set is satisfied, and so on as required. Furthermore, as an important part of fulfilling this goal, a new method for generating the partial re-encryption keys is proposed, and the scheme is proven secure against chosen-ciphertext attacks in the random oracle model.
L. Mo and Guoxiang Yao, "Multi-Use Conditional Proxy Re-encryption," 2013 International Conference on Information Science and Cloud Computing Companion, Dec. 7, 2013. doi: 10.1109/ISCC-C.2013.90
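A toy single-hop ElGamal-style (BBS-like) re-encryption over a tiny multiplicative group can illustrate the core delegation mechanics — a proxy transforms a ciphertext for Alice into one for Bob without ever seeing the plaintext. It is neither multi-use, conditional, nor ECC-based, so it only sketches the idea, not the paper's scheme; all parameters are deliberately insecure toy values.

```python
p, q, g = 23, 11, 4          # g generates the order-q subgroup of Z_p*

a, b = 3, 5                  # Alice's and Bob's secret keys (in Z_q)
rk = (b * pow(a, -1, q)) % q # re-encryption key b/a mod q, held by the proxy

r, m = 7, 9                  # ephemeral exponent and message (in Z_p*)
c1 = (m * pow(g, r, p)) % p  # ciphertext for Alice: (m * g^r, g^{ar})
c2 = pow(g, a * r, p)

c2_bob = pow(c2, rk, p)      # proxy: g^{ar} -> g^{br}, learns nothing about m
g_r = pow(c2_bob, pow(b, -1, q), p)      # Bob strips b to recover g^r
recovered = (c1 * pow(g_r, -1, p)) % p   # m * g^r / g^r
print(recovered)             # -> 9, the original message
```

The modular-inverse form `pow(x, -1, n)` requires Python 3.8+; in a real scheme the group would be an elliptic curve of cryptographic size and the re-key generation would embed the delegation condition.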
Large enterprises tend to have heterogeneous systems that form "islands of information", which makes information interaction and interoperability very difficult. Integrated development based on service-oriented architecture (SOA), which does not change the underlying architecture of enterprise applications, is a good solution to this problem. This paper proposes a scheme to integrate heterogeneous resources based on SOA and Web Services, and builds inter-departmental business systems to conduct a feasibility analysis and verify the theoretical basis, with interoperability, cross-industry interconnection, and data sharing as benchmarks.
Yilan Yang, "Application of SOA and Web Service in Implementing Heterogeneous System Integration," 2013 International Conference on Information Science and Cloud Computing Companion, Dec. 7, 2013. doi: 10.1109/ISCC-C.2013.50
In this paper, we propose a novel approach to multiframe super-resolution reconstruction that incorporates a non-local prior into the maximum a posteriori (MAP) formulation. This prior expresses the observation that recovered images tend to exhibit repetitive structures. The original non-local prior algorithm requires a great deal of computation for its huge number of weight calculations, so techniques of weight symmetry, a moving-average filter, and a limited search window are adopted to speed up the non-local filter. Meanwhile, the Non-Linear Conjugate Gradient (NLCG) method is introduced to simultaneously solve the optimization for the high-resolution (HR) image and adapt the non-local prior to it. Experimental results on extensive synthetic and real images demonstrate that the proposed algorithm is superior to representative algorithms both quantitatively and qualitatively.
Shuai Chen, Bin Chen, and Yi-bao He, "Super-Resolution Employing an Efficient Nonlocal Prior," 2013 International Conference on Information Science and Cloud Computing Companion, Dec. 7, 2013. doi: 10.1109/ISCC-C.2013.131
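The non-local weighting that underlies the prior can be sketched in one dimension: each sample is replaced by an average of samples whose neighbourhoods look similar, and the limited search window (one of the paper's speed-ups) bounds the inner loop. The MAP/NLCG reconstruction itself is not reproduced, and all parameter values are illustrative.

```python
import math

def nonlocal_filter(x, patch=1, window=3, h=40.0):
    """Replace each sample by a similarity-weighted average of nearby samples."""
    n = len(x)
    clamp = lambda i: min(max(i, 0), n - 1)   # replicate-pad at the borders
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - window), min(n, i + window + 1)):
            # squared distance between the patches centred at i and j
            d = sum((x[clamp(i + k)] - x[clamp(j + k)]) ** 2
                    for k in range(-patch, patch + 1))
            w = math.exp(-d / (h * h))        # similar patches get large weight
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

noisy = [10.0, 10.0, 50.0, 10.0, 10.0, 10.0]
smoothed = nonlocal_filter(noisy)
print([round(v, 1) for v in smoothed])
```

Because the weights depend on whole patches rather than single pixels, repeated structures reinforce each other while isolated outliers (like the 50 above) are pulled toward their neighbours — exactly the self-similarity assumption the prior encodes.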
Recently, evolutionary algorithms have successfully solved many optimization problems; however, their performance deteriorates when applied to complex high-dimensional problems. A clustering cooperative-coevolution scheme is introduced into the differential evolution (DE) algorithm to tackle high-dimensional problems. In the scheme, a clustering method is employed to decompose the problem, which works well with cooperative coevolution. The proposed algorithm is evaluated on the MPB and CEC09 benchmark functions with expanded dimensions. The results are very promising and show clearly that the proposed algorithm is effective for dynamic high-dimensional optimization problems.
Shuzhen Wan, "Differential Evolution with Clustering Cooperative Coevolution for High-Dimensional Problems," 2013 International Conference on Information Science and Cloud Computing Companion, Dec. 7, 2013. doi: 10.1109/ISCC-C.2013.64
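The cooperative-coevolution structure can be sketched as follows: the variables are split into groups, and a basic DE/rand/1/bin loop evolves each group while the remaining variables stay frozen in a shared context vector. A fixed split stands in for the paper's clustering step, and the sphere function, population size, and group layout are illustrative choices.

```python
import random

def de_subcomponent(f, context, idx, pop, F=0.5, CR=0.9, gens=30):
    """Evolve only the variables in `idx`; the rest stay frozen in `context`."""
    def full(x):                       # embed a subvector into the context vector
        v = context[:]
        for k, i in enumerate(idx):
            v[i] = x[k]
        return f(v)
    for _ in range(gens):
        for i in range(len(pop)):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[k] + F * (b[k] - c[k]) if random.random() < CR else pop[i][k]
                     for k in range(len(idx))]
            if full(trial) <= full(pop[i]):   # greedy one-to-one selection
                pop[i] = trial
    best = min(pop, key=full)
    for k, i in enumerate(idx):
        context[i] = best[k]           # write the subcomponent's best back

def sphere(x):
    return sum(v * v for v in x)

random.seed(0)
dim, groups = 6, [[0, 1, 2], [3, 4, 5]]
context = [random.uniform(-5, 5) for _ in range(dim)]
for cycle in range(5):
    for idx in groups:
        pop = [[random.uniform(-5, 5) for _ in idx] for _ in range(15)]
        pop[0] = [context[i] for i in idx]    # seed with the current best
        de_subcomponent(sphere, context, idx, pop)
print(round(sphere(context), 4))
```

Seeding each new subpopulation with the current context values keeps the overall objective monotonically non-increasing across cycles, which is what makes the round-robin decomposition converge on separable problems like this one.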
Self-modifying code poses problems in binary translation: when a program overwrites its own code, the blocks translated from that code must be retranslated, and the runtime must emulate the modification accurately. To improve the translation efficiency of self-modifying code, this paper designs and realizes a new code-cache management policy for self-modifying code, named ASCMS. ASCMS locates the affected translated block precisely, rather than invalidating a whole trace or the entire code cache. In simulation experiments, ASCMS achieves a 3.95x improvement for self-modifying code in binary translation.
Anzhan Liu and Wenqi Wang, "ASCMS: An Accurate Self-Modifying Code Cache Management Strategy in Binary Translation," 2013 International Conference on Information Science and Cloud Computing Companion, Dec. 7, 2013. doi: 10.1109/ISCC-C.2013.52
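The block-granular invalidation idea can be sketched with a map from each translated block to its guest address range: a guest write invalidates only the blocks whose source bytes it overlaps, instead of flushing a trace or the whole cache. The class and method names are illustrative, not from the paper.

```python
class CodeCache:
    def __init__(self):
        self.blocks = {}               # block id -> (start, end) guest range

    def add(self, block_id, start, end):
        self.blocks[block_id] = (start, end)

    def on_guest_write(self, addr):
        """Evict only blocks whose source bytes were overwritten."""
        stale = [b for b, (s, e) in self.blocks.items() if s <= addr < e]
        for b in stale:
            del self.blocks[b]         # will be retranslated on next execution
        return stale

cache = CodeCache()
cache.add("blk0", 0x1000, 0x1020)
cache.add("blk1", 0x1020, 0x1040)
evicted = cache.on_guest_write(0x1008)
print(evicted)                         # -> ['blk0']; blk1 survives untouched
```

A real translator would index the ranges with an interval tree or per-page bitmap rather than a linear scan, but the payoff is the same: untouched translations stay warm in the cache.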
Traditional regularized super-resolution (SR) algorithms can reconstruct the high-resolution (HR) image to some extent, but high-frequency information is seriously lost and edges and details become blurred. This paper presents an improved regularized SR algorithm. First, a new interpolation algorithm is used to obtain the initial estimate of the HR image. Second, a trilateral filter is adopted as the regularization term to preserve edges and details. Finally, the steepest descent method is used as the iterative algorithm to obtain the optimal solution. Simulated experiments, including comparisons with existing reconstruction algorithms, show that the proposed algorithm performs better than the others, and that the edges and details of the image are well preserved.
Shuang Wang, Bing-liang Hu, Xiaokun Dong, and Xing Yan, "An Improved Super-Resolution Reconstruction Algorithm Based on Regularization," 2013 International Conference on Information Science and Cloud Computing Companion, Dec. 7, 2013. doi: 10.1109/ISCC-C.2013.44
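The iterative structure of such a reconstruction can be shown with a schematic 1-D stand-in: steepest descent on a data-fidelity term (each HR block should average to its LR sample) plus a simple smoothness regulariser. The paper's trilateral-filter term and interpolation initialisation are replaced by plain choices here, and all parameters are illustrative.

```python
def reconstruct(y, scale=2, lam=0.1, step=0.1, iters=200):
    """Upscale LR signal y by steepest descent on fidelity + smoothness."""
    n = len(y) * scale
    x = [y[i // scale] for i in range(n)]          # naive initial upsample
    for _ in range(iters):
        grad = [0.0] * n
        for i in range(len(y)):                    # data term: each HR block's
            avg = sum(x[i*scale:(i+1)*scale]) / scale   # average should match y[i]
            for j in range(i * scale, (i + 1) * scale):
                grad[j] += (avg - y[i]) / scale
        for i in range(1, n - 1):                  # smoothness regulariser
            grad[i] += lam * (2 * x[i] - x[i-1] - x[i+1])
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x

lr = [1.0, 3.0, 2.0]
hr = reconstruct(lr)
print([round(v, 2) for v in hr])
```

Swapping the quadratic smoothness term for an edge-aware regulariser (such as the paper's trilateral filter) is what keeps edges sharp: a quadratic term penalises every gradient equally, so it blurs edges along with noise.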