As university enrolment grows, the number of registered students keeps rising and almost all universities are operating at saturation, which puts the capacity of teaching buildings to a severe test. Lights in teaching buildings are used frequently and on a large scale, and poor management together with students' lack of awareness of saving electricity wastes a great deal of energy and contributes to environmental pollution. To keep pace with the idea of a low-carbon life, and to make studying more convenient for students, this paper introduces a new intelligent lighting-control system. The system controls all the lights using the distribution of students, the ambient light intensity, the total number of students, and even the headcount in each classroom. The sensors communicate with one another over Power Line Carrier, which greatly reduces both wasted energy and the cost of building the system.
{"title":"A College Teaching Building Lighting Control System Based on Power Line Carrier","authors":"Yanming Huo, Haochen Wang, X. Zuo, Zhimin Cui","doi":"10.1109/ISCC-C.2013.11","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.11","url":null,"abstract":"With the increasing of university enrolment, the registered students are becoming more and more. Almost all the universities are in the state of saturation. This leads to a big trial for the strength of the building. In the process of using the teaching buildings, lights are used more frequently and massively. Meanwhile because of the poor management and the students' lack of saving up electricity, a lot of energy is wasted, at the same time resulting in the pollution for the environment. In order to keep pace with the idea of a low-carbon life, and to provide convenience for the students' study, this essay introduces a new light-control system with artificial intelligence. This system can control all the lights by knowing the distribution of the students, the light intensity, the total number of the students, even the number in each classroom. Each sensor connects with each other by Power Line Carrier. This can greatly reduce the waste of the energy and the cost of building this system.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128617954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
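The control logic sketched in the abstract (switch lights according to room occupancy and ambient light) can be illustrated with a minimal decision rule. The function names, field names, and threshold values below are illustrative assumptions, not taken from the paper:

```python
def decide_lights(occupancy: int, ambient_lux: float,
                  min_occupancy: int = 1, lux_threshold: float = 300.0) -> bool:
    """Return True if a classroom's lights should be on: the room is
    occupied and the ambient light falls below the threshold."""
    return occupancy >= min_occupancy and ambient_lux < lux_threshold


def schedule(rooms: dict) -> dict:
    """Map each room id to an on/off decision from its sensor readings."""
    return {rid: decide_lights(r["occupancy"], r["lux"])
            for rid, r in rooms.items()}
```

For example, `schedule({"101": {"occupancy": 0, "lux": 120.0}, "102": {"occupancy": 25, "lux": 90.0}})` would switch on only room 102: room 101 is empty, so its lights stay off regardless of the light level.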
Because of its simplicity and effectiveness, collaborative filtering (CF) has become one of the most successful recommendation algorithms, and user-based CF is one of its classic methods. To address the problem that, in user-based CF, the items commonly rated by two users are often too few to calculate their similarity effectively, this paper proposes an improved collaborative filtering model with item similarity, called ISCF. ISCF incorporates item similarity into user-based collaborative filtering, which helps alleviate data sparsity and thereby supports the calculation of user similarity. Experimental results show that ISCF outperforms both the average method and user-based CF: compared with user-based CF, ISCF's average percentage improvements in MAE and RMSE are 21.9% and 17.7%, respectively. In addition, ISCF can predict more items than user-based CF, with an average percentage improvement of 33.86% in prediction diversity.
{"title":"An Improved Collaborative Filtering Model Considering Item Similarity","authors":"Yefei Zha, Yuqing Zhai","doi":"10.1109/ISCC-C.2013.40","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.40","url":null,"abstract":"Because of its simplicity and effectiveness, collaborative filtering (CF) became one of the most successful recommendation algorithms. User-based CF is one classic method of CF algorithms. In order to solve the problem that common rating items are often too few to be used to effectively calculate the similarity of two users in user-based CF, we proposed an improved collaborative filtering model with item similarity called ISCF in this paper. In ISCF model, the similarity of items was considered in user-based collaborative filtering, which contributes to alleviate the problem of data sparsity and therefore calculate the similarity of user. Experimental results illustrate that our approach ISCF outperforms the average method and user-based CF. Compared with user-based CF, the average improvement in the percentage of ISCF at MAE and RMSE are 21.9% and 17.7%, respectively. In addition, our approach ISCF can predict more items than user-based CF, and the average improvement in the percentage of ISCF at prediction diversity is 33.86%.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128724477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
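The core idea described above can be sketched as follows: item similarity fills in missing ratings so that user similarity can then be computed on denser vectors. This is a simplified illustration with hypothetical function names and a zero-means-unrated convention; the paper's exact weighting scheme may differ:

```python
import numpy as np


def cosine_sim(a, b):
    """Cosine similarity over positions rated (non-zero) in both vectors."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    den = np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])
    return float(a[mask] @ b[mask]) / den if den else 0.0


def densify(R, item_sim, k=2):
    """Fill each missing rating (0 marks 'unrated') with a weighted average
    of the user's ratings on the k most similar items."""
    R_hat = R.astype(float).copy()
    for u in range(R.shape[0]):
        rated = np.nonzero(R[u])[0]
        if rated.size == 0:
            continue
        for i in range(R.shape[1]):
            if R[u, i] == 0:
                nbrs = rated[np.argsort(item_sim[i, rated])[::-1][:k]]
                w = item_sim[i, nbrs]
                if w.sum() > 0:
                    R_hat[u, i] = float(w @ R[u, nbrs] / w.sum())
    return R_hat
```

Item similarity comes from the rating-matrix columns (`item_sim[i, j] = cosine_sim(R[:, i], R[:, j])`); user similarity is then computed with the same cosine measure on the densified rows, which is where the sparsity relief pays off.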
Combustion experiments and chemical kinetics simulations generate huge volumes of data and are both computationally and data intensive. A cloud-based cyberinfrastructure known as CloudFlame has been implemented to improve the computational efficiency, scalability, and availability of data for combustion research. The architecture consists of an application layer, a communication layer, and distributed cloud servers running in a mixed environment of Windows, Macintosh, and Linux systems. The application layer runs software such as the CHEMKIN modeling application. The communication layer provides secure transfer and archiving of kinetic, thermodynamic, transport, and gas-surface data between clients and cloud servers using private/public keys. For the chemical molecular structures of compounds, a robust XML schema based on the Process Informatics Model (PrIMe), combined with a workflow methodology for digitizing, verifying, and uploading data from scientific graphs and tables to PrIMe, has been implemented. The experience of combustion researchers at the King Abdullah University of Science and Technology (KAUST) Clean Combustion Research Center and its collaborating partners indicates a significant improvement in the speed of chemical kinetics work and in the accuracy of searching for the right chemical kinetic data.
{"title":"CloudFlame: Cyberinfrastructure for Combustion Research","authors":"G. Goteng, Naveena Nettyam, S. M. Sarathy","doi":"10.1109/ISCC-C.2013.57","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.57","url":null,"abstract":"Combustion experiments and chemical kinetics simulations generate huge data that is computationally and data intensive. A cloud-based cyber infrastructure known as Cloud Flame is implemented to improve the computational efficiency, scalability and availability of data for combustion research. The architecture consists of an application layer, a communication layer and distributed cloud servers running in a mix environment of Windows, Macintosh and Linux systems. The application layer runs software such as CHEMKIN modeling application. The communication layer provides secure transfer/archive of kinetic, thermodynamic, transport and gas surface data using private/public keys between clients and cloud servers. A robust XML schema based on the Process Informatics Model (Prime) combined with a workflow methodology for digitizing, verifying and uploading data from scientific graphs/tables to Prime is implemented for chemical molecular structures of compounds. The outcome of using this system by combustion researchers at King Abdullah University of Science and Technology (KAUST) Clean Combustion Research Center and its collaborating partners indicated a significant improvement in efficiency in terms of speed of chemical kinetics and accuracy in searching for the right chemical kinetic data.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114587496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, based on Chaudhury's fast O(1) bilateral filter (FBF) and the shift-variant technique, we present a fast adaptive bilateral filter (FABF) that achieves sharpness enhancement and noise removal with good computational efficiency. FABF sharpens an image by increasing the slope of its edges without producing overshoot or undershoot. Compared with FBF, FABF-restored images are significantly sharper; compared with the adaptive bilateral filter (ABF), FABF shows similar performance in noise removal and sharpness enhancement, while its execution time is substantially shorter.
{"title":"Fast Adaptive Bilateral Filtering with Fixed Parameters for Sharpness Enhancement and Noise Reduction","authors":"Yuanzhong Shu, Ye Chen, Yannan Su","doi":"10.1109/ISCC-C.2013.73","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.73","url":null,"abstract":"In this paper, based on Chaudhury's fast O(1) bilateral filtering (FBF) and the shift-variant technique, we present a fast adaptive bilateral filtering (FABF) for sharpness enhancement and noise removal with good computational efficiency. FABF sharpens an image by increasing the slope of the edges without producing overshoot or undershoot. Compared with FBF, FABF-restored images are significantly sharper. Compared with adaptive bilateral filter (ABF), FABF shows a similar performance in terms of noise removal and sharpness enhancement, while the execution time of FABF is substantially shorter than that of ABF.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116233601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
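For orientation, the baseline bilateral filter that FBF accelerates and that ABF/FABF make adaptive can be sketched directly. This is a plain brute-force O(n·r²) reference implementation, not Chaudhury's O(1) scheme or the paper's adaptive variant (which additionally shifts the range-kernel center to steepen edges); parameter names are the conventional ones:

```python
import numpy as np


def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=20.0):
    """Brute-force bilateral filter on a 2-D grayscale array: each output
    pixel is a weighted mean of its neighbourhood, with weights that decay
    with both spatial distance (sigma_s) and intensity difference (sigma_r),
    so noise is smoothed while strong edges are preserved."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(patch - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

On a step edge of height 100 with `sigma_r=20`, the range weight of cross-edge neighbours is roughly exp(-12.5), so the edge survives essentially untouched while flat regions are averaged.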
This paper proposes an analysis method for reliability modeling of embedded and distributed software based on AADL. Using the key information in the AADL structural model, an AADL fault model is established to describe the fault behavior of distributed software. On this basis, reliability analysis is carried out through the sensitivity analysis method. Analyzing system reliability in this way helps designers identify, in the early stages of development, the key modules that affect system reliability, and provides a strategic decision-making foundation for enhancing it.
{"title":"The Research of Embedded Software Reliability Modeling Analysis Based on AADL","authors":"T. Chuan, Yujun Liu, Xin Li, Qingling Duan","doi":"10.1109/ISCC-C.2013.143","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.143","url":null,"abstract":"This paper proposes an analysis method of embedded and distributed software reliability modeling based on AADL. By using the key information in AADL structural model, AADL fault model is established, to describe the fault behavior of computer distributed software. On this basis, the reliability analysis is carried through the sensitivity analysis method. We analyze the reliability of the system, which can help designers to find out the key modules that affect the reliability of the system in the early stages of development and provides a strategic decision foundation to enhance the system reliability.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114812312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Location-based services (LBS) bring great convenience to daily life, but they have also raised serious location privacy problems. K-anonymity is one of the most popular privacy-preserving methods; however, methods that rely on a trusted third party (TTP) can turn the TTP into a performance bottleneck, so TTP-free protocols have been proposed. Existing TTP-free protocols still cannot resist attacks in which multiple users collude with an LBS provider. To solve these problems, this paper proposes a novel location privacy-preserving protocol. The protocol uses key agreement to construct perturbations that cancel out in aggregate; the perturbations disguise real locations without affecting LBS service quality. With the help of homomorphic encryption, the LBS provider can compute the centroid of a companion set without learning the locations of its members. Analysis shows that the protocol resists location privacy attacks from insiders and outsiders, especially from multiple users colluding with the LBS provider, and achieves high service quality while providing strong location privacy protection for LBS.
{"title":"A Location Privacy-Preserving Protocol Based on Homomorphic Encryption and Key Agreement","authors":"Xiao-ling Zhu, Yang Lu, Xiaojuan Zhu, Shuwei Qiu","doi":"10.1109/ISCC-C.2013.17","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.17","url":null,"abstract":"Location-based services (LBS) bring so much convenience to our daily life. However they have incurred serious location privacy problems. K-anonymity is the one of most popular privacy-preserving methods. The method relying on a trusted third party (TTP) might cause the TTP to become a performance bottleneck. So TTP-free protocols are proposed. But existing TTP-free protocols cannot resist attacks from multiple users colluding with a LBS provider. To solve the problems, this paper proposes a novel location privacy-preserving protocol. The protocol uses key agreement to construct the perturbations which can be removed on the whole. The perturbations are used to disguise real locations, meanwhile, they do not affect LBS service quality. With the help of homomorphic encryption, the LBS provider can compute the centroid of a companion set while it does not know the locations of the members in the set. The analysis shows that the protocol can resist location privacy attacks from insiders and outsiders, especially from multiple users colluding with the LBS provider. The protocol achieves high service quality while providing strong location privacy protection for LBS.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125243895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
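The abstract's idea of perturbations "which can be removed on the whole" can be illustrated with pairwise-cancelling offsets: each pair of users derives a shared random offset (in the real protocol, from key agreement), one adds it and the other subtracts it, so the offsets sum to zero and the centroid of the disguised locations equals the true centroid. The sketch below replaces actual key agreement with a seeded PRNG and omits the homomorphic-encryption layer entirely:

```python
import random


def pairwise_perturbations(n, seed=0):
    """For each user pair (i, j), draw a shared random 2-D offset; i adds
    it and j subtracts it, so the per-user perturbations sum to zero.
    (A seeded PRNG stands in for the protocol's key agreement.)"""
    rng = random.Random(seed)
    pert = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = rng.uniform(-1e3, 1e3), rng.uniform(-1e3, 1e3)
            pert[i][0] += dx
            pert[i][1] += dy
            pert[j][0] -= dx
            pert[j][1] -= dy
    return pert


def centroid(points):
    """Plain centroid of a list of (x, y) points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)
```

Each user submits `(x + pert_x, y + pert_y)`; no single submitted point reveals a real location, yet the centroid the provider computes over the companion set is exact, which is why service quality is unaffected.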
At present, most identity-based encryption (IBE) schemes from lattices originate from the results of Gentry et al.: they regard each identity as a bit string of equal length and map a matrix to every bit of the identity string, which makes them considerably less efficient. In this paper, we construct an efficient IBE scheme from the standard learning with errors problem that handles each identity as a single chunk for better performance. We prove that our IBE is IND-sID-CCA secure in the standard model. Meanwhile, we also show that our construction is IND-ID-CCA secure via the technique of imposing additional restrictions on the identities, presented by Boyen and Boneh.
{"title":"Efficient Identity-Based Encryption from Lattice","authors":"Huiyan Chen, Dongmei Chen, Yanshuo Zhang","doi":"10.1109/ISCC-C.2013.67","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.67","url":null,"abstract":"At present, most of the identity based encryption (IBE) schemes from the lattice originate from the results of Gentry et.al., and regard each identity as a bit string with equal length and then map a matrix to every bit of identity string. Consequently, they are considerably less efficient. In our paper, we construct an IBE which is efficient from standard learning with errors problem and handles identities as a chunk for performance. In the standard model, our paper gives a proof which our IBE is IND-sID-CCA secure. Mean-while, we also show which our IBE construction is also IND-ID-CCA secure via the technique i.e., imposing additional restrictions on the identities, presented by Boyen and Boneh.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123322321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cognitive radio is emerging as a popular technology for ending spectrum scarcity. It rests on four fundamental technologies, namely spectrum sensing, resource allocation, disturbance avoidance, and opportunistic spectrum access, of which spectrum sensing is the foremost. Blind sensing methods attract much attention because they are easy to implement and require little prior knowledge. The spectrum sensing method based on Information Theoretic Criteria (ITC) is a blind technique that can sense primary-user activity with little prior information; however, it cannot flexibly set the false alarm probability that communication systems usually require. This paper studies the effect of an adjustable parameter applied to the ITC method. The method's effectiveness is established by simulation in several scenarios with different numbers of samples and receiving antennas. The simulations show that the adjustable parameter can change the false alarm probability according to the system's requirement, and that the sensing method with the adjustable parameter can overcome noise uncertainty.
{"title":"Research on the Effect of the Adjustable Parameter Applied to Information Theoretic Criteria Based Spectrum Sensing Method","authors":"Tingting Liu, Jian Zhang, Zhi-ming Wang","doi":"10.1109/ISCC-C.2013.116","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.116","url":null,"abstract":"Cognitive radio is now emerging as a popular technology to end the resource scarcity. Cognitive radio has four fundamental technologies that is spectrum sensing, resource allocation, disturbance avoidance and opportunistic spectrum access. Among them, spectrum sensing technology is the foremost. Blind sensing method attracts much attention for its easiness to implement and the few requirements for prior knowledge. The spectrum sensing method employing Information Theoretic Criteria (ITC) method is a blind sensing technology that can sense the activity of primary user with little prior information. However, it can not agilely set the false alarm probability which is usually required in communication systems. In this paper, a research on the effect of the adjustable parameter is carried out. The effectiveness of this method is set up by simulation in several scenarios with different numbers of sampling samples and receiving antennas. It is shown in the simulation that the adjustable parameter can change the false alarm probability on the basis of the requirement of the system and also the sensing method with an adjustable parameter can overcome the noise uncertainty.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126450787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
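An eigenvalue-based ITC detector with an adjustable penalty weight can illustrate the mechanism discussed above: scaling the penalty term of the criterion trades detection sensitivity against false alarm probability. The penalty form below is the classical MDL one with a hypothetical scaling factor `alpha`; the paper's exact criterion may differ:

```python
import numpy as np


def itc_detect(samples, alpha=1.0):
    """Estimate the number of primary signals from M-antenna samples
    (shape M x N) with an MDL-style information theoretic criterion.
    `alpha` scales the penalty term: larger alpha lowers the false alarm
    probability, alpha = 0 removes the penalty and the detector always
    reports the maximum number of signals."""
    M, N = samples.shape
    cov = samples @ samples.conj().T / N
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending eigenvalues
    scores = []
    for k in range(M):
        tail = ev[k:]                            # M - k smallest eigenvalues
        geo = np.exp(np.mean(np.log(tail)))      # geometric mean
        arith = np.mean(tail)                    # arithmetic mean
        loglik = -N * (M - k) * np.log(geo / arith)
        penalty = alpha * 0.5 * k * (2 * M - k) * np.log(N)
        scores.append(loglik + penalty)
    return int(np.argmin(scores))                # 0 => channel judged vacant
```

With noise-only input the geometric and arithmetic means of the eigenvalues nearly coincide, so the unpenalized term is small and k = 0 wins; a strong primary signal inflates the largest eigenvalue and forces k ≥ 1 regardless of the penalty weight.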
Starting from the observation of known facts, the researcher collects all performances that conform to the cognitive observation and associates them to form a knowledge base, from which an initial network model is built. Based on this model, we impose mutual uncertainty constraints using newly defined achievement and effect probabilities between evidences and adjacent nodes. On this basis, a drive method that satisfies a competition threshold keeps the reasoning moving to higher levels until an objective conclusion meeting the expectation is found. The confirmation result makes the best hypothesis evidence the focus of the competition; it decides which evidence has priority to be chosen, and gives an efficient solution for choosing a deterministic target through rapid positioning and accurate selection.
{"title":"Research on Optimum Model for the Best Link of Expert System","authors":"Chen Guo, Jiaman Ma, Yuefan Liu","doi":"10.1109/ISCC-C.2013.113","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.113","url":null,"abstract":"According to the observation of known facts, researcher finds all performances which are conform to the observation of cognitive to associate with form knowledge base, and then build an initial network model. Based on this model, we complete the mutual uncertainty constraint by using the new achieve probability and effect probability between evidences and adjacent nodes. And on this basis, using the drive method which satisfied competition threshold, keeping reasoning to the high level, until find an object conclusion meets the expectation. The confirming result makes the best assuming evidence become the centre of focus from competition. It decides the orientation of evidence which is priority to be chosen, and give a high efficiency solution for choosing deterministic target by rapid positioning and accurate selection.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131844802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An inventory control model for spare parts based on multi-objective programming is set up, and a particle swarm optimization (PSO) algorithm is designed and improved to solve it. To address particles deviating from the solution space and premature convergence during the search, a regain mechanism and an interference mechanism are built into the classical PSO, a switching mechanism from the Cartesian space to the discrete space of the inventory control model is established, and an optimized solution algorithm based on the improved PSO is then presented. Finally, simulation experiments on inventory control validate the multi-objective programming model. This work lays the foundation for the design and realization of simulation and optimization of spare-parts inventory control.
{"title":"Design of the Multi-level Inventory Control Model and Solution Algorithm for the Spare Parts","authors":"Yu Cao, Tiening Wang, Shengliang Xu, Yu Zhu","doi":"10.1109/ISCC-C.2013.63","DOIUrl":"https://doi.org/10.1109/ISCC-C.2013.63","url":null,"abstract":"The inventory control model based on the multi-objective programming of the spare parts is set up. The algorithm of particle swarm optimization (PSO) is designed and improved to solve the inventory control multi-objective programming model. Aiming at the question of the particle that deviates from the solution space and the prematurity problem in the searching course, the regain mechanism and interference mechanism are built up to improve the classical PSO, and the switching mechanism from the Cartesian space to the discrete space of inventory control model is established, and then the optimized solution algorithm based on the improved PSO is presented. At last, the simulation experiments of inventory control are made to validate the multi-objective programming model. It lays the foundation for design and realization of simulation and optimization of inventory control of the spare parts.","PeriodicalId":313511,"journal":{"name":"2013 International Conference on Information Science and Cloud Computing Companion","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132102690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
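The classical PSO that the paper improves upon can be sketched as follows; the boundary clamp is a simple stand-in for the paper's "regain mechanism", and all parameter values are conventional defaults, not those of the paper:

```python
import random


def pso(f, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal classical PSO minimizing f over a box [lo, hi]^dim.
    Each particle's velocity blends inertia, attraction to its personal
    best, and attraction to the global best; particles that leave the box
    are clamped back to the boundary."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

For a multi-objective inventory model the objectives would first be scalarized (e.g. by weighted sum) into such an `f`, and the paper's discrete-space switching would map the continuous positions to integer order quantities; both steps are omitted here.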