Yanhong Zhao, Hongqi Li, Liping Zhu, Fengqi Tan, Ying Wang
With the rapid growth of knowledge resources, production departments in the field of oil and gas exploration and development generate a large volume of result documents every day. Meanwhile, a large part of the relevant knowledge exists only as experience in experts' minds. Finding knowledge that meets users' needs with less effort, and making effective use of expertise so that it is not lost, are becoming increasingly important. This paper adopts a knowledge model for the subject of Well Site Deployment that is composed of a process model and an ontology model. In this knowledge model, the process model provides the detailed operation flow and data flow, while the ontology model provides the evaluation standards and operating standards. We build a web-based knowledge service platform on this knowledge model, through which knowledge can be shared between experts and non-experts. Furthermore, users can reuse the knowledge and trace existing work results of well site deployment and development through the platform. All of this helps end users improve the efficiency of decision making.
{"title":"A Process-Oriented Ontology-Based Knowledge Model","authors":"Yanhong Zhao, Hongqi Li, Liping Zhu, Fengqi Tan, Ying Wang","doi":"10.1109/ISCC-C.2013.29","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.29","url":null,"abstract":"With the rapid growth of knowledge resources, the production departments in the field of oil and gas exploration and development produce daily a great volume of result documents. Meanwhile, a large part of knowledge is stored in the experts' brain as experience. How to spend less effort finding knowledge meeting users' need and how to make effective use of the expertise to avoid knowledge loss become more and more important. This paper adopts a knowledge model which is composed by process model and ontology model in the subject of Well Site Deployment. In this knowledge model, the process model provides the detailed operation flow and data flow, the ontology model provides the evaluating standards and the operating standards. We build a web-based knowledge service platform based on this knowledge model, through which knowledge can be shared between experts and non-experts. Furthermore, users can reuse the knowledge and trace the existing work results of well site deployment and development by the platform. All of these can help the final users to improve the efficiency of decision making.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123588753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How to improve the utilization of IT facilities is a major problem for enterprises that run many real-time control systems. We argue that improving or rebuilding legacy applications with cloud computing concepts is more suitable for these traditional real-time companies than building a new cloud platform. In this paper, we propose a federate cloud (FC) architecture, based on a centric model, for a large group company with many subsidiaries running similar real-time applications. In the FC architecture, a sub-cloud is constructed for the applications of each subsidiary, and all the sub-clouds are connected by a cloud bus. We discuss the detailed mechanisms of the FC architecture, including the construction of the FC components, the real-time cloud storage strategies, and the cloud service scheduling algorithm. Experimental results show that our method can effectively improve the utilization of IT facilities.
{"title":"Towards Real-Time Federate Cloud for Large Group Company","authors":"Lixin Du, Wei He","doi":"10.1109/ISCC-C.2013.148","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.148","url":null,"abstract":"How to improve the utilization of IT facilities is a major problem for enterprises which have many real-time control systems. We think that improving or rebuilding legacy applications using cloud computing ideology is more suitable than building new cloud platform for these traditional real-time companies. In this paper, we propose federate cloud (FC) architecture for large group company having many subsidiaries with the similar real-time applications based on centric model. In the FC architecture, a sub cloud is constructed for applications in each subsidiary and all the sub clouds are connected together by cloud bus. We discuss the detailed mechanism for the FC architecture, including construction of the FC component, the real-time cloud storage strategies and cloud service scheduling algorithm. Experiment results show that our method can improve the utilization of IT facilities effectively.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115354238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motivated by the complexity of military information systems architecture design, we propose the concept of architecture optimization design. After analyzing the application of the logical data meta-model (DM2) in building architecture data and products, we build a framework for architecture optimization design. Combining the building sequence with the design guidelines for architecture data and products, we propose an architecture core data optimization design process. After analyzing the main contents of architecture optimization design, we put forward the goals and guidelines for building mathematical models of architecture core data optimization design. Finally, taking the optimization design of activity data as an example, we build the corresponding mathematical model and illustrate the related optimization method. The architecture core data optimization design method offers a practical approach to making architecture design solutions more quantitative, scientific, and automated.
{"title":"Method of Architecture Core Data Optimization Design Based on DM2","authors":"Xiaoxue Zhang, Ai-min Luo, Xueshan Luo","doi":"10.1109/ISCC-C.2013.84","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.84","url":null,"abstract":"Based on the complexity in military information systems architecture design, we proposed the concepts of architecture optimization design. After analyzed application of logical data meta-model (DM2) in building architecture data and products, we built a framework of architecture optimization design. Combined with the building sequence and designing guidelines of architecture data and products, we proposed architecture core data optimization design process. After analyzing the main contents of taking architecture optimization design, the goals and the guidelines of building mathematical models of architecture core data optimization design are put forward. Finally, we took the optimization design of activity data as an example, built the corresponding mathematical model, and illustrated relative optimization method. Architecture core data optimization design method affords a realizable approach of making architecture design solutions more quantitatively, scientifically, and automatically.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115381188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To improve the learning mechanism of BFNNs, this paper first analyzes the failure mode of BFNNs trained by SBALR, which takes the form of a local cycle. Then, by means of sensitivity theory, a disturbance learning algorithm is developed to allow BFNNs suffering from learning failure to escape the local cycle. The new algorithm aims to preserve the existing learning performance as much as possible. Experimental results demonstrate the effectiveness of the new algorithm in terms of both learning effect and learning efficiency.
{"title":"Analyzing on the Failure Mode of BFNNs' Learning and its Improving Algorithm","authors":"Shuiming Zhong, Yinghua Lv, Tinghuai Ma, Yu Xue","doi":"10.1109/ISCC-C.2013.47","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.47","url":null,"abstract":"In order to improve the learning mechanism of BFNNs, the paper firstly analyzes the failure mode of BFNNs trained by SBALR, which takes the form of a local cycle. And then by mean of the sensitivity theory, a disturbance learning algorithm is developed to make the BFNNs that suffering from learning failure to escape the local cycle. The new algorithm aims to keep the existing learning performance as much as possible. Experimental results demonstrate the effectiveness of the new algorithm on both learning effect and learning efficiency.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123306197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anomaly detection is an active branch of intrusion detection technology that can detect intrusion behaviors, including abnormal system or user behavior and unauthorized use of computer resources. Clustering analysis is an unsupervised method for grouping a data set into multiple clusters, and using a clustering algorithm to detect anomalous behavior offers good scalability and adaptability. This paper focuses on improving the k-means clustering algorithm and using it to detect abnormal records. Our goal is to increase the detection rate (DR) and decrease the false alarm rate (FAR) in anomaly detection by choosing appropriate parameter values and improving the clustering algorithm. In our IE&FSDM algorithm, we use the minimum standard information entropy of network records to compute the initial cluster centers. In the testing phase, a discrepancy metric is introduced to help determine the exact number of clusters in the testing data set. Using the initial cluster centers calculated in the pre-phase, IE&FSDM computes the actual clusters by converging the cluster centers, and it obtains the actual cluster centers according to the frequency-sensitive discrepancy metric. Following the improved k-means algorithm, it then iterates until all network data are divided into corresponding clusters, and from the clustering results we can classify normal and abnormal network behaviors. Finally, we implement the IE&FSDM algorithm on the KDD Cup 1999 data set. Test results show that, compared with previous clustering methods, the IE&FSDM algorithm improves the detection rate of anomalous behavior and reduces the false alarm rate.
{"title":"Research of Clustering Algorithm Based on Information Entropy and Frequency Sensitive Discrepancy Metric in Anomaly Detection","authors":"Han Li, Qiuxin Wu","doi":"10.1109/ISCC-C.2013.108","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.108","url":null,"abstract":"Anomaly detection is an active branch of intrusion detection technology which can detect intrusion behaviors including system or users' non-normal behavior and unauthorized use of computer resources. Clustering analysis is an unsupervised method to group data set into multiple clusters. Using clustering algorithm to detect anomaly behavior has good scalability and adaptability. This paper mainly focuses on improving k-means clustering algorithm, and uses it to detect the abnormal records. Our goal is to increase the DR value and decrease the FAR value in anomaly detection by calculating appropriate value of parameters and improve the clustering algorithm. In our IE&FSDM algorithm, we use network records' minimum standard information entropy to compute the initial cluster centers. In testing phase, discrepancy metric is introduced to help calculate exact number of clusters in testing data set. Using the results of initial cluster centers calculated in the pre-phase, IE&FSDM compute the actual clusters by converging cluster centers and obtains the actual cluster centers according to the frequency sensitive discrepancy metric. Then comply with the improved k-means algorithm, iterative calculate until divide all network data into corresponding clusters, and according to the results of cluster we can classify the normal and abnormal network behaviors. At last, we use KDD CUP1999 dataset to implement IE&FSDM algorithm. 
Test results show that comparing with previous clustering methods, IE&FSDM algorithm improve the detection rate of anomaly behavior and reduce the false alarm rate.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130005694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
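As a rough, hypothetical sketch of the entropy-guided k-means initialization idea described above (the paper's exact IE&FSDM formulas are not reproduced in the abstract; the inverse-entropy feature weighting and farthest-point center selection below are assumptions for illustration only):

```python
import numpy as np

def entropy(column, bins=10):
    # Shannon entropy of one feature, estimated from a histogram.
    counts, _ = np.histogram(column, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def init_centers_by_entropy(X, k):
    # Hypothetical reading of the entropy-based initialization: weight
    # features by inverse entropy, then pick the k points farthest
    # apart in the weighted space as initial cluster centers.
    w = np.array([1.0 / (entropy(X[:, j]) + 1e-9) for j in range(X.shape[1])])
    Xw = X * w
    centers = [Xw[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(Xw - c, axis=1) for c in centers], axis=0)
        centers.append(Xw[np.argmax(d)])
    return np.array(centers), w

def kmeans(Xw, centers, iters=50):
    # Plain Lloyd iterations in the weighted feature space.
    for _ in range(iters):
        labels = np.argmin(
            [np.linalg.norm(Xw - c, axis=1) for c in centers], axis=0)
        new_centers = []
        for i in range(len(centers)):
            pts = Xw[labels == i]
            new_centers.append(pts.mean(axis=0) if len(pts) else centers[i])
        centers = np.array(new_centers)
    return labels, centers
```

In an anomaly detection setting, records falling in small or distant clusters would then be flagged as abnormal; that decision rule is likewise an assumption, not the paper's stated criterion.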
Taking the railway industry in China as an example, a SUPER-SBM DEA model with uncontrollable factors is adopted to analyze and evaluate operational performance in 30 Chinese provinces. Based on a detailed analysis of the operational features of the railway industry, the optimization of the related slack variables is analyzed. The following suggestions are put forward: current railway transportation capacity should be used reasonably, input factors should be allocated reasonably, and input capital should be accumulated through various channels.
{"title":"Operation Performance Evaluation and Optimization Based on SUPER-SBM DEA Model in Railway Industry in China","authors":"Z. Li","doi":"10.1109/ISCC-C.2013.96","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.96","url":null,"abstract":"By taking railway industry in China for example, SUPER-SBM DEA model of existing uncontrollable factors are adopted to analyze and evaluate the operation performance in 30 provinces in China. Based on detailed analysis of railway industry operation features, optimization of related slack variables is analyzed. Suggestions are put forward as follows: current railway transportation capacity should be reasonably used, input factors should be reasonably collocated, input capital should be accumulated through various channels.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127882664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Hang, Guoren Yang, Bo Yu, Xuesong Liang, Ying Tang
The present paper introduces a neural network based approach for solving the generalized eigenvalue problem Ax = λBx, where the n-by-n matrices A and B are real-valued, B is non-singular, and B^-1 A is an orthogonal matrix whose determinant is equal to 1. The approach can extract the modulus-largest and modulus-smallest eigenvalues, and the corresponding n-dimensional complex eigenvectors can be extracted using the proposed algorithm, which is essentially based on an ordinary differential equation of order n. Experimental results demonstrate the effectiveness of the proposed algorithm.
{"title":"Neural Network Based Algorithm for Generalized Eigenvalue Problem","authors":"T. Hang, Guoren Yang, Bo Yu, Xuesong Liang, Ying Tang","doi":"10.1109/ISCC-C.2013.93","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.93","url":null,"abstract":"The present paper introduces a neural network based on approach for solving the generalized eigenvalue problem Ax = λBx, where n-by-n matrices A and B are realvalued, B is non-singular, and 1 B A - is an orthogonal matrix whose determinant is equal to 1. The approach can extract the modulus largest and the modulus smallest eigenvalues, and the corresponding n-dimensional complex eigenvectors can be extracted by using the proposed algorithm that is essentially based on an ordinary differential equation of order n. Experimental results demonstrated the effectiveness of the proposed algorithm.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124570213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To investigate the performance of different evolutionary algorithms on walking gait optimization, we designed an optimization framework containing four bio-inspired methods: Genetic Algorithm (GA), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Particle Swarm Optimization (PSO), and Differential Evolution (DE). In the learning process of each method, we employed three learning tasks to optimize the walking gait, aiming at generating gaits with higher speed, stability, and flexibility respectively. We analyzed the gaits optimized by each of the four methods separately. The comparison of these results indicates that DE performs better than the other three algorithms. It also shows that the gaits learned by CMA-ES and PSO are acceptable but have drawbacks compared to DE, while GA shows weak performance on gait optimization.
{"title":"Performance Comparisons of Evolutionary Algorithms for Walking Gait Optimization","authors":"C. Cai, Hong Jiang","doi":"10.1109/ISCC-C.2013.100","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.100","url":null,"abstract":"To investigate the performance of different evolutionary algorithms on walking gait optimization, we designed an optimization framework. There are four bio-inspired methods in the framework, which include Genetic Algorithm (GA), Covariance Matrix Adaption Evolution Strategy (CMA-ES), Particle Swarm Optimization (PSO) and Differential Evolution (DE). In the learning process of each method, we employed three learning tasks to optimize the walking gait, which are aiming at generating a gait with higher speed, stability and flexibility respectively. We analyzed the gaits optimized by each four methods separately. According to the comparison of these results, it indicates that DE performs better than the other three algorithms. The comparison also shows that the gaits learned by CMA-ES and PSO are acceptable, but there exist drawbacks compared to DE. And among these methods, GA presents weak performance on gait optimization.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117031166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the growing popularity of social network applications, researchers can benefit from social network analysis, but it raises serious privacy concerns for the individuals involved. Some techniques have been proposed for protecting personal privacy. However, existing methods tend to focus either on un-weighted social networks, anonymizing nodes and structural information, or on weighted social networks, anonymizing only edge weights. We propose an edge vector perturbation method that preserves both structural properties and edge weights for weighted social networks. First, we construct the edge vector, or edge space, of the original weighted social network. Second, we calculate the edge betweenness and assign weights to the elements of the edge vector. Third, we construct the release candidate set using the weighted Euclidean distance. We leverage the notions of edge vector and edge space in weighted social networks: given a social network G^s, we adopt two methods to build the original edge vector E_Vec(G^s), and then select some edge vectors from ψ(K_n) as the publication candidate set of E_Vec(G^s). To ensure the effectiveness of the released data set, we use the Euclidean distance between the vectors as a similarity metric. We conduct experiments on data sets to study publication utility and quality. Our method can be applied to a typical perturbation algorithm to achieve better preservation of the utility of its output.
{"title":"Preserving Social Network Privacy Using Edge Vector Perturbation","authors":"Lihui Lan, Lijun Tian","doi":"10.1109/ISCC-C.2013.103","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.103","url":null,"abstract":"With the social network application, Popularity, the researchers can benefit through social network analysis, but it raises serious privacy concerns for the individual involved in social network. Some techniques have been proposed for protecting personal privacy. However, the existing methods tend to focus on un-weighted social network for anonymizing nodes and structure information or weighted social networks for anonymizing edge weight. We propose an edge vector perturbation method to preserve structural properties and edge weights for weighted social networks. First, we construct edge vector or edge space of the original weighted social network. Second, we calculate the edge betweenness and assign weights to elements in edge vector. Third, we construct release candidate set by the weighted Euclidean distance. We leverage the notions of edge vector and edge space in weighted social network. Given a social network G^s, we adopt two methods to build original edge vector E_Vec (G^s), and then select from some edge vectors from ψ(K_n)as publication candidate set of E_Vec(G^s). To ensure the effectiveness of released dataset, we use Euclidean distance between the vectors as metrics of the similarity. We execute experiments on datasets to study publication utility and quality. 
Our method can be applied to a typical perturbation algorithm to achieve better preservation of the utility of its output.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116194049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
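The candidate-selection step, choosing release candidates by weighted Euclidean distance from the original edge vector, can be sketched as follows. The per-edge importance weights are taken as given here, whereas the paper derives them from edge betweenness; the function names are illustrative:

```python
import numpy as np

def weighted_distance(u, v, w):
    # Weighted Euclidean distance between two edge vectors; w holds
    # per-edge importance weights (edge-betweenness-derived in the
    # paper, supplied directly in this sketch).
    return float(np.sqrt(np.sum(w * (u - v) ** 2)))

def select_candidates(original, candidates, w, k=2):
    # Keep the k perturbed edge vectors closest to the original one,
    # so the released graph stays structurally similar while the
    # individual edge weights are perturbed.
    d = [weighted_distance(original, c, w) for c in candidates]
    order = np.argsort(d)[:k]
    return [candidates[i] for i in order]
```

Weighting by edge betweenness means a perturbation on a structurally central edge costs more distance than the same perturbation on a peripheral edge, which biases the release toward preserving the graph's backbone.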
With the rapid development of high technology such as cloud computing, information security has become more critical than before, and cryptography has become a key technology in information security. Recently, some new cryptographic theories have attracted increasing attention, and research on algorithm efficiency and security has become a hot topic. Chaotic algorithms are very suitable for stream cipher encryption, not only because the generated time series are sensitive to initial conditions, but also because their complex structure is difficult to analyze and forecast. At the same time, they can provide pseudo-random sequences with excellent randomness, correlation, and complexity. This paper studies image encryption algorithms based on chaotic theory.
{"title":"Study on Image Encryption Algorithm Based on Chaotic Theory","authors":"Qiuxia Zhang","doi":"10.1109/ISCC-C.2013.129","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.129","url":null,"abstract":"With the rapid development of high-tech such as the cloud technology, information security has become more critical than before, so the cryptography assunes a key technology in information security. Recently, some new cryptography theories have attracted increasing attention under the background of research on algorithm efficiency and security has become the current hot research topic. Chaotic algorithm is very suitable for stream cipher encryption not only for its sensitivity to initial conditions for time series generated but also for its complex structure which is difficult to analyze and forecast. At the same time, it can provide smart pseudo random sequence with excellent randomness, correlation and complexity. This paper mainly studies about the image encryption algorithm based on chaotic theory.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126606512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}