Pub Date : 2015-06-18DOI: 10.1109/ECS.2015.7125053
K. Nivetha, D. Saraswady
The deployment of large-scale biometric systems in both commercial and government applications has served to increase the public's awareness of this technology. This dramatic growth in biometric systems has clearly highlighted the challenges associated with designing and integrating them. 'Multimodal biometrics', in which information from three different biometric sources (fingerprint, retina and finger vein) is used in the authentication system, has therefore grown in importance. Unlike unibiometric systems, multimodal systems are less sensitive to noise and make spoofing difficult for attackers. As a deployment of multimodal biometrics, this project aims to dynamically ensure performance and provide an enhanced level of security by combining finger vein, retina and fingerprint with the Hyper Image Encryption Algorithm (HIEA). The HIEA is applied to the biometric template, and only the transformed template, derived from a secret key, is stored in the database, which increases the Genuine Acceptance Rate (GAR) and reduces the False Acceptance Rate (FAR).
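The HIEA itself is not specified in the abstract. As a purely hypothetical stand-in, the store-only-the-transformed-template idea can be sketched with a generic keyed transform (here a hash-derived XOR keystream; the function names and the exact-match comparison are illustrative, not the paper's construction):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive an n-byte keystream from the secret key (counter-mode hashing)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def transform_template(template: bytes, key: bytes) -> bytes:
    """XOR the fused biometric template with a key-derived stream; only this
    transformed template is stored in the database, never the raw template."""
    ks = keystream(key, len(template))
    return bytes(t ^ k for t, k in zip(template, ks))

def match(stored: bytes, probe: bytes, key: bytes) -> bool:
    """Transform the probe with the same secret key and compare in the
    transformed domain (exact-match check, for illustration only)."""
    return transform_template(probe, key) == stored
```

Because XOR is its own inverse, the same transform applied to a probe with the correct key reproduces the stored value, while a wrong key yields an unrelated bit pattern.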
{"title":"Enhancing security for multimodal biometric using Hyper Image Encryption Algorithm","authors":"K. Nivetha, D. Saraswady","doi":"10.1109/ECS.2015.7125053","DOIUrl":"https://doi.org/10.1109/ECS.2015.7125053","url":null,"abstract":"The deployment of large-scale biometric systems in both commercial and government applications has served to increase the public's awareness of this technology. This dramatic growth in biometric system has clearly highlighted the challenges associated in designing and integrating these systems. `Multimodal biometrics' is development to great importance wherein the information from three different biometric sources namely finger print, retina, finger vein is used for authentication system. Unlike unibiometric systems, these are sensitive to noise and make spoofing difficult for hackers. As an deployment of multimodal biometric, this project aims to dynamically ensure the performance to provide an enhanced level of security by combining Finger vein, Retina and Fingerprint with Hyper Image Encryption Algorithm (HIEA). Hyper image encryption algorithm is applied to the biometric template and only the transformed template is stored in the database based on secret key in which increases GAR and reduces FAR.","PeriodicalId":202856,"journal":{"name":"2015 2nd International Conference on Electronics and Communication Systems (ICECS)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123093441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-06-18DOI: 10.1109/ECS.2015.7124976
Bishwas Mishra, S. Fernandes, K. Abhishek, A. Alva, Chaithra Shetty, Chandan V. Ajila, Dhanush Shetty, Harshitha A. Rao, P. Shetty
Facial expression is a form of non-verbal communication: a person conveys his or her feelings through facial expressions. In computer systems, facial expressions help in verification, identification and authentication. One popular use of facial expression recognition is automatically capturing feedback from customers reacting to a particular product. Effective recognition technology is in high demand among users of today's gadgets and technologies. Facial expression recognition is broadly classified into two families of techniques: feature based and model based. The key contribution of this article is an analysis of the latest state-of-the-art feature based and model based techniques. These techniques are analyzed on various standard public face databases: GEMEP-FERA, BU-3DFE, CK+, Bosphorus, MMI, JAFFE, LFW, FERET, CMU-PIE, Georgia Tech, AR, eNTERFACE 05 and FRGC. From our analysis we found that, among feature based techniques, the curvelet approach evaluated on the FRGCv2 database gave an excellent 97.83% recognition rate, and among model based techniques, the textured 3D video technique evaluated on the BU-4DFE database gave an excellent 94.34% recognition rate.
{"title":"Facial expression recognition using feature based techniques and model based techniques: A survey","authors":"Bishwas Mishra, S. Fernandes, K. Abhishek, A. Alva, Chaithra Shetty, Chandan V. Ajila, Dhanush Shetty, Harshitha A. Rao, P. Shetty","doi":"10.1109/ECS.2015.7124976","DOIUrl":"https://doi.org/10.1109/ECS.2015.7124976","url":null,"abstract":"Facial expression is a way of non-verbal communication. A person depicts his/her feelings through facial expressions. In computer systems facial expressions help in verification, identification and authentication. One popular use of facial expression recognition is automatic feedback capture from customers upon reacting to a particular product. Effective recognition technology is in high demand by the common users of today's gadgets and technologies. Facial expression recognition technique is broadly classified into two techniques: Feature based techniques and Model based techniques. The key contribution of this article is that we have analyzed latest state of the art techniques in Feature based techniques and Model based techniques. These techniques are analyzed using various standard public face databases: GEMEP-FERA, BU-3DFE, CK+, Bosphorous, MMI, JAFFE, LFW, FERET, CMU-PIE, Georgia tech, AR, eNTERFACE 05 and FRGC. From our analysis we found that for Feature based Curvelet approach performed on FRGCv2 database gave an excellent 97.83% recognition rate and Model based textured 3D video technique performed on BU-4DFE database gave an excellent 94.34 % recognition rate.","PeriodicalId":202856,"journal":{"name":"2015 2nd International Conference on Electronics and Communication Systems (ICECS)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123167572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-06-18DOI: 10.1109/ECS.2015.7124948
Sourya Roy, Arijit Mallick, Sheli Sinha Chowdhury, Sangita Roy
The cuckoo search (CS) algorithm is a metaheuristic and one of the most efficient optimization techniques developed so far. Several attempts have been made in the past to improve its efficiency. In this paper we exploit the fundamental step-length distribution function of the CS algorithm in order to increase its efficiency: in place of the conventional Lévy distribution, a Gamma distribution is used. We demonstrate the increased efficiency of the Gamma-distribution-aided CS algorithm in the following paper.
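The proposed modification can be sketched as a basic cuckoo search loop in which the step length is drawn from `random.gammavariate` instead of a Lévy flight. This is a minimal sketch; the parameter values and the abandonment scheme are illustrative assumptions, not the paper's:

```python
import random

def cuckoo_search_gamma(f, dim, n_nests=15, pa=0.25, iters=300,
                        shape=0.5, scale=0.5, lo=-5.0, hi=5.0, seed=1):
    """Minimise f over [lo, hi]^dim with a basic cuckoo search whose step
    lengths come from a Gamma distribution instead of a Levy flight."""
    rng = random.Random(seed)
    clip = lambda v: max(lo, min(hi, v))
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    for _ in range(iters):
        for i in range(n_nests):
            step = rng.gammavariate(shape, scale)          # Gamma step length
            cand = [clip(x + step * rng.gauss(0.0, 1.0)) for x in nests[i]]
            if f(cand) < f(nests[i]):                      # greedy replacement
                nests[i] = cand
        nests.sort(key=f)                                  # best nests first
        for i in range(int((1 - pa) * n_nests), n_nests):  # abandon worst pa
            nests[i] = [clip(b + rng.gauss(0.0, 0.5)) for b in nests[0]]
    return min(nests, key=f)

sphere = lambda x: sum(v * v for v in x)
best = cuckoo_search_gamma(sphere, dim=3)
```

With a small Gamma shape parameter, many step lengths are tiny, which helps local refinement; occasional large draws preserve some exploration, loosely mimicking the heavy tail of a Lévy flight.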
{"title":"A novel approach on Cuckoo search algorithm using Gamma distribution","authors":"Sourya Roy, Arijit Mallick, Sheli Sinha Chowdhury, Sangita Roy","doi":"10.1109/ECS.2015.7124948","DOIUrl":"https://doi.org/10.1109/ECS.2015.7124948","url":null,"abstract":"Cuckoo search algorithm (CS) is one of the most efficient optimization techniques developed so far. Several attempts have been made in past in order to improve the efficiency of CSO algorithm. In this paper we have tried to exploit the fundamental step length distribution function of the CS algorithm in order to increase its efficiency. Cuckoo search is a metaheuristic optimization technique. In place of conventional Levy distribution, Gamma distribution has been used. We will represent the increased efficiency of the Gamma distribution aided CSO algorithm in the following paper.","PeriodicalId":202856,"journal":{"name":"2015 2nd International Conference on Electronics and Communication Systems (ICECS)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125062719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-06-18DOI: 10.1109/ECS.2015.7124766
C. Geetha, C. Puttamadappa
Cryptography is a technique for secret communication, whereas steganography obscures the very existence of the secret communication within other data. The secret communication can be carried by many cover sources, such as image, audio and video files. Our work proposes data hiding by embedding the message of interest using a geometric style of cryptographic algorithm, thus providing high security. Wavelet and curvelet transform algorithms are used to preprocess the images. Even if the stego image (the image carrying the embedded data) undergoes a reverse operation, the data cannot be extracted if the receiver is unaware of the exact coordinates of the geometric shape. Hence, retrieving the secret image becomes a hard task for an attacker. Our experimental results are verified for the properties of both cryptography and steganography, and the technique may be applicable to many kinds of multimedia applications.
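The paper's exact construction is not given here, but the idea of hiding data at coordinates derived from a secret geometric figure can be sketched as follows, with a circle whose centre and radius act as the key (the LSB embedding and the circle choice are illustrative assumptions):

```python
import math

def circle_points(cx, cy, r, n):
    """n integer pixel coordinates on a secret circle; without the key
    (cx, cy, r) a receiver cannot locate the embedded bits."""
    return [(cx + round(r * math.cos(2 * math.pi * k / n)),
             cy + round(r * math.sin(2 * math.pi * k / n)))
            for k in range(n)]

def embed(img, bits, key):
    """Write each message bit into the least-significant bit of one pixel."""
    for (x, y), b in zip(circle_points(*key, len(bits)), bits):
        img[y][x] = (img[y][x] & ~1) | b
    return img

def extract(img, n_bits, key):
    """Read the LSBs back from the same secret coordinates."""
    return [img[y][x] & 1 for x, y in circle_points(*key, n_bits)]
```

Reading the stego image with wrong circle parameters samples untouched pixels, so the message does not come back, matching the abstract's claim that extraction fails without the exact coordinates of the shape.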
{"title":"Enhanced stego-crypto techniques of data hiding through geometrical figures in an image","authors":"C. Geetha, C. Puttamadappa","doi":"10.1109/ECS.2015.7124766","DOIUrl":"https://doi.org/10.1109/ECS.2015.7124766","url":null,"abstract":"Cryptography is a technique for secret communication where as obscuring the secret communication using for different data is Steganography. The secret communication is carried through many sources like image, audio & video files. Our work is mainly proposing data hiding by embedding the message of interest using geometric style of cryptographic algorithm, thus providing high security. Wavelet and curvelet transform algorithms are used to perform preprocessing of images. Even if the image carrying embedded data i.e., Stego image undergoes a reverse operation and data cannot be extracted if the receiver is unaware of the exact coordinates of the geometric shape. Hence retrieving secret image for an attacker becomes a hard task. Our Experimental results are verified for both the properties of Cryptography and Steganography it may be applicable for kind of multimedia applications.","PeriodicalId":202856,"journal":{"name":"2015 2nd International Conference on Electronics and Communication Systems (ICECS)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125074149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-06-18DOI: 10.1109/ECS.2015.7124876
O. James
In compressed sensing, the l1-constrained minimal singular value (l1-CMSV) of an encoder is used to analyze (theoretically) the robustness of decoders against noise. In this paper, we show that for random encoders the square of the l1-CMSV (S-CMSV) is a random variable, and that for Gaussian encoders the S-CMSV admits simple, closed-form probability density and cumulative distribution functions. We illustrate the benefits of these distributions for analyzing the robustness of various decoders. In particular, we interpret the existing theoretical robustness results for decoders such as basis pursuit in terms of the maximum possible undersampling.
{"title":"On the PDF of the square of constrained minimal singular value for robust signal recovery analysis","authors":"O. James","doi":"10.1109/ECS.2015.7124876","DOIUrl":"https://doi.org/10.1109/ECS.2015.7124876","url":null,"abstract":"In compressed sensing, the l1-constrained minimal singular value (l1-CMSV) of an encoder is used for analyzing (theoretically) the robustness of decoders against noise. In this paper, we show that for random encoders, the square of the l1-CMSV (S-CMSV) is a random variable. And, for the Gaussian encoders, the S-CMSV admits a simple, closed-form probability and a cumulative distribution functions. We illustrate the benefits of these distributions for analyzing the robustness of various decoders. In particular, we interpret the existing theoretical robustness results of the decoders such as the basis pursuit in terms of the maximum possible undersampling.","PeriodicalId":202856,"journal":{"name":"2015 2nd International Conference on Electronics and Communication Systems (ICECS)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125971338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-06-18DOI: 10.1109/ECS.2015.7124778
M. Narayanan, C. Arun
Peer-to-Peer (P2P) computing and its applications are in wide-ranging use across the majority of the key areas of engineering and technology. Devoid of any centralized server, peers can share their content because they are linked directly with each other, which is why P2P computing gives enhanced communication among peers. It is essential for the video server to maintain the data content link in cache memory, so cache memory sizes must be enlarged to a definite level and the cache must also be securely sustained by each and every peer. Using machine learning, the proposed method concentrates on classifying video servers by seasonal and non-seasonal popularity. Two supervised machine learning algorithms are utilized in this paper. The Case-Based Reasoning algorithm, built on the Retrieve, Reuse, Revise and Retain cycle, is used to sort out well-liked videos, and the Averaged One-Dependence Estimators (AODE) algorithm is used to classify video servers as seasonal or non-seasonal. The work is simulated in the Java programming language.
{"title":"Categorize the video server in P2P networks based on seasonal and normal popularity videos using machine learning approach","authors":"M. Narayanan, C. Arun","doi":"10.1109/ECS.2015.7124778","DOIUrl":"https://doi.org/10.1109/ECS.2015.7124778","url":null,"abstract":"There is a wide-ranging use of Peer-to-Peer (P2P) computing and applications in majority of the key areas of Engineering and Technology. Devoid of any centralized server, they can share their content since peers are linked with each other. This is the reason why P2P computing gives enhanced communication among peers. It is essential for the video server to maintain the data content link in cache memory so the cache memory sizes will be enlarged to a definite level and also the cache needs to be securely sustained by each and every peers. By utilizing the Machine Learning method, the proposed method centers its concentration on classifying the video server depending on seasonal and non seasonal popularity. Two supervised Machine Learning algorithms are utilized in this paper and are explained as follows. The Case-Based Reasoning algorithm is utilized in order to sort out well-liked videos and the Averaged One-Dependence Estimators (AODE) algorithm is utilized to sort out video server into seasonal and non-seasonal. The first algorithm is based on Retrieve, Reuse, Revise and Retain methods and the latter algorithm sorts out the video server into seasonal and non-seasonal based video servers. The work simulated by Java programming language.","PeriodicalId":202856,"journal":{"name":"2015 2nd International Conference on Electronics and Communication Systems (ICECS)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123336115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-06-18DOI: 10.1109/ECS.2015.7124924
K. Deepak, B. Malarkodi, K. Sagar
This paper introduces a new pilot pattern obtained using an NRZ encoder. Using the proposed NRZ-encoded pilots for semi-blind channel estimation in MIMO-OFDM (in which the full channel response is obtained by interpolating the successive channel estimates) reduces the mean square error for moderately varying channels and reduces complexity. This method makes efficient use of the bandwidth and improves system performance through more accurate channel estimation compared to conventional pilot-based channel estimation.
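The two generic building blocks mentioned above, NRZ mapping of pilot bits and interpolation of the per-pilot channel estimates, can be sketched as follows. The paper's actual pilot pattern is not reproduced here, and linear interpolation is an assumption:

```python
def nrz_encode(bits):
    """Map pilot bits {0, 1} to NRZ symbols {-1.0, +1.0}."""
    return [1.0 if b else -1.0 for b in bits]

def interpolate_channel(pilot_idx, pilot_est, n_sub):
    """Linearly interpolate channel estimates measured at pilot subcarriers
    onto all n_sub subcarriers (flat extrapolation at the band edges)."""
    h = []
    for k in range(n_sub):
        if k <= pilot_idx[0]:
            h.append(pilot_est[0])
        elif k >= pilot_idx[-1]:
            h.append(pilot_est[-1])
        else:
            j = max(i for i, p in enumerate(pilot_idx) if p <= k)
            t = (k - pilot_idx[j]) / (pilot_idx[j + 1] - pilot_idx[j])
            h.append((1 - t) * pilot_est[j] + t * pilot_est[j + 1])
    return h
```

The same interpolation works unchanged for complex-valued estimates, since Python arithmetic on `complex` supports the weighted sum directly.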
{"title":"NRZ encoded pilots for Semi Blind Channel estimation","authors":"K. Deepak, B. Malarkodi, K. Sagar","doi":"10.1109/ECS.2015.7124924","DOIUrl":"https://doi.org/10.1109/ECS.2015.7124924","url":null,"abstract":"This paper introduces a new pilot pattern which is obtained by using NRZ Encoder. By using the proposed NRZ Encoded Pilots for Semi Blind Channel estimation (in which the channel response is obtained by interpolating the subsequent channel estimations) in MIMO-OFDM, reduces the Mean Square Error for moderately varying channels and reduces complexity. This method perfectly utilizes the bandwidth and improves the system performance by accurate channel estimation compared to pilot based channel estimation.","PeriodicalId":202856,"journal":{"name":"2015 2nd International Conference on Electronics and Communication Systems (ICECS)","volume":"379 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123710068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-06-18DOI: 10.1109/ECS.2015.7124879
G. P. Sarmila, N. Gnanambigai, P. Dinadayalan
Cloud computing is an emerging paradigm over the internet which provides applications and services, based on the concepts of abstraction and virtualization, for a fraction of the cost. The number of cloud users increases day by day to utilize the available resources. Most cloud applications run at remote nodes, where many clients may request the server at a time. This can overload a server, which results in faults. Load balancing is the networking technique that distributes load across nodes to optimize resource utilization, throughput and response time and to avoid overload. The need for load balancing increases with the demand for computing resources. Fault tolerance is the ability of a system to continue to work even in the presence of faults, and it is a critical issue to be addressed to ensure reliability and availability in cloud computing. By effectively balancing the incoming load, fault tolerance can be achieved in the cloud. This paper aims to compare the efficient load balancing algorithms that are fault tolerant.
{"title":"Survey on fault tolerant — Load balancing algorithmsin cloud computing","authors":"G. P. Sarmila, N. Gnanambigai, P. Dinadayalan","doi":"10.1109/ECS.2015.7124879","DOIUrl":"https://doi.org/10.1109/ECS.2015.7124879","url":null,"abstract":"The cloud computing is an emerging paradigm over the internet which provides applications and services based on the concept of abstraction and virtualization for a fraction of the cost. The number of cloud user increases day by day to utilize the available resources. Most of the cloud applications are run at remote nodes where many clients may request for the server at a time. This causes overloading in server which results in fault. Load balancing is the networking technique that distributes load to the nodes to optimize resource utilization, throughput, response time and overload. The need of load balancing increases with increase in the demand for computing resources. Fault-tolerance is the ability of system to continue to work even in the presence of fault. This is a critical issue to be addressed to ensure reliability and availability in cloud computing. By effectively balancing the incoming load, fault tolerance can be achieved in cloud. This paper aims to compare the efficient load balancing algorithms that are fault tolerant.","PeriodicalId":202856,"journal":{"name":"2015 2nd International Conference on Electronics and Communication Systems (ICECS)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116459929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-06-18DOI: 10.1109/ECS.2015.7124945
A. Rani, R. Rajalaxmi
Feature selection is the task of selecting an optimal subset of features. It is used for high-dimensional data reduction in several applications such as medicine, image processing and text mining. Several methods have been introduced for unsupervised feature selection, some based on the filter approach and some on the wrapper approach. In existing work, unsupervised feature selection methods using the Genetic Algorithm, Particle Swarm Optimization with Relative Reduct, Quick Reduct and Ant Colony Optimization have been introduced, and these methods yield good performance. In this paper we propose a novel method to select a subset of features from unlabeled data using the binary bat algorithm with the sum of squared error (SSE) as the fitness function. The proposed method is then evaluated with various classification algorithms (decision tree, multilayer perceptron, support vector machine) and with clustering quality measures such as the sum of squared error. The results show that our proposed method gives higher accuracy when compared with the other optimization algorithms.
{"title":"Unsupervised feature selection using binary bat algorithm","authors":"A. Rani, R. Rajalaxmi","doi":"10.1109/ECS.2015.7124945","DOIUrl":"https://doi.org/10.1109/ECS.2015.7124945","url":null,"abstract":"Feature selection is selecting a subset of optimal features. Feature selection is being used in high dimensional data reduction and it is being used in several applications like medical, image processing, text mining, etc. Several methods were introduced for unsupervised feature selection. Among those methods some are based on filter approach and some are based on wrapper approach. In the existing work, unsupervised feature selection methods using Genetic Algorithm, Particle Swarm Optimization with Relative Reduct, Quick Reduct and Ant Colony Optimization have been introduced. These methods yield better performance for unsupervised feature selection. In this paper we proposed a novel method to select subset of features from unlabeled data using binary bat algorithm with sum of squared error as the fitness function. The proposed method is then tested with various classification algorithms like decision tree, multilayer perceptron, support vector machine and clustering quality measures like sum of squared error. The results show that our proposed method gives more accuracy when compared with other optimization algorithm.","PeriodicalId":202856,"journal":{"name":"2015 2nd International Conference on Electronics and Communication Systems (ICECS)","volume":"29 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122944169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-06-18DOI: 10.1109/ECS.2015.7124892
D. Poornima, S. Vijayashaarathi
Video transmission over heterogeneous networks faces many challenges arising from the available bandwidth, link delay, frame loss, throughput, reliability and network congestion. In video streaming it is important that the video stream reaches users within the allocated time and without errors in video frames, which otherwise lead to packet loss. Hence, to avoid packet loss and to enhance the Packet Delivery Ratio (PDR) and throughput of the network, a modified Forward Error Correction (FEC) mechanism is proposed that takes feedback information (frame count, buffer status and round-trip time (RTT)) into account. Simulation results compare the performance in terms of PDR, throughput and handover delay under various video packet rates and packet intervals.
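A feedback-driven FEC rule of this kind can be sketched as follows. This is not the paper's exact formula; the redundancy sizing and the headroom heuristic are assumptions made for illustration:

```python
import math

def fec_parity(loss_rate, rtt_ms, buffer_fill, k=10,
               rtt_budget_ms=200.0, max_parity=8):
    """Choose parity packets per k-packet FEC block from receiver feedback."""
    # parity needed to cover the expected packet losses in one block
    need = math.ceil(k * loss_rate / max(1.0 - loss_rate, 1e-9))
    # back off when the RTT budget is nearly spent or the playout buffer is
    # nearly empty, since extra repair traffic would only add delay
    headroom = min(max(1.0 - rtt_ms / rtt_budget_ms, 0.0), buffer_fill)
    return max(0, min(max_parity, round(need * (0.5 + 0.5 * headroom))))
```

The sender would re-evaluate this per feedback report, so redundancy rises with observed loss but is throttled whenever the delay budget or the receiver buffer leaves no room for it.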
{"title":"Streaming high definition video over heterogeneous wireless networks(HWN)","authors":"D. Poornima, S. Vijayashaarathi","doi":"10.1109/ECS.2015.7124892","DOIUrl":"https://doi.org/10.1109/ECS.2015.7124892","url":null,"abstract":"Video transmission over the heterogeneous networks faces many challenges due to available bandwidth, link delay, frame lost, throughput, reliability, network congestion. In video streaming it is important that the video stream must reach the users within allocated time and also without errors in video frames which leads to packet loss. Hence to avoid the packet loss and to enhance the Packet Delivery Ratio(PDR) and Throughput of the networks a modified Forward Error Correction mechanism was proposed by considering the feedback information(frame count, buffer status, round trip time(RTT)). Simulation results compares the performance in terms of packet delivery ratio(PDR), throughput and handover delay under various video packet rate and packet intervals.","PeriodicalId":202856,"journal":{"name":"2015 2nd International Conference on Electronics and Communication Systems (ICECS)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129301957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}