Until now, implementing high availability and disaster recovery for business-critical applications required investing in geo-cluster technologies, which are simply not affordable for small and medium business customers. The arrival of Windows Server 2012 brings with it a perfectly acceptable disaster recovery solution for business applications running on Hyper-V [1] virtual machines. Hyper-V Replica enables Hyper-V hosts or clusters to replicate running VMs to remote Hyper-V hosts over a standard IP WAN connection, providing a very cost-effective disaster recovery solution in the event of a primary data center outage. This paper proposes a new idea of combining Hyper-V Replica and PowerShell 3.0 to automate the disaster recovery process in a cost-effective and secure manner.
"Automated Secured Disaster Recovery with Hyper-V Replica and PowerShell" — G. Jayaseelan, P. Charles. 2014 World Congress on Computing and Communication Technologies, published 2014-04-03. doi:10.1109/WCCCT.2014.60
The World Wide Web is a massive repository of web pages and links, providing Internet users with information on a vast range of topics, and it continues to grow tremendously. Users' accesses are recorded in web logs, and web usage mining is the application of mining techniques to these logs. Because of this tremendous usage, log files grow at a fast rate and their size is becoming huge. Preprocessing plays a vital role in an efficient mining process, as log data is normally noisy and indistinct. Sessions and paths are reconstructed during preprocessing by appending missing pages. Additionally, the transactions that illustrate user behavior are constructed precisely in preprocessing by calculating the reference lengths of user accesses by means of the byte rate. Using web clustering, several types of objects can be clustered into different groups for various purposes. By using the belief distribution of Dempster-Shafer theory, the belief-function similarity measure in this algorithm adds to the clustering task the ability to capture the uncertainty in web users' navigation behavior. This paper reports experiments on the preprocessing and clustering of web logs. The experimental results show the considerable performance of the proposed algorithm.
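The reference-length idea mentioned above can be illustrated with a small sketch: the time a user spent viewing a page is estimated from the bytes transferred and an assumed byte rate, and pages viewed longer than a cutoff are treated as content pages rather than navigational ones. The function names, the byte rate, and the cutoff are illustrative assumptions, not values from the paper.

```python
def reference_length(bytes_sent, byte_rate):
    """Estimated viewing time of a page, in seconds."""
    return bytes_sent / byte_rate

def classify_pages(entries, byte_rate, cutoff):
    """Label each (url, bytes_sent) log entry as a navigational
    'auxiliary' page or a 'content' page by estimated viewing time."""
    result = []
    for url, bytes_sent in entries:
        kind = "content" if reference_length(bytes_sent, byte_rate) > cutoff else "auxiliary"
        result.append((url, kind))
    return result
```

For example, with a byte rate of 1024 B/s and a 10-second cutoff, a 2 KB page is classified as auxiliary while a 100 KB page counts as content.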
"A New Clustering and Preprocessing for Web Log Mining" — B. Maheswari, P. Sumathi. 2014 World Congress on Computing and Communication Technologies, published 2014-04-03. doi:10.1109/WCCCT.2014.67
Information retrieval (IR) is the process of finding the documents in a collection that are about a specific topic. The information need is expressed by the user as a query. Documents that satisfy the given query in the judgment of the user are said to be relevant; documents that are not on the given topic are said to be non-relevant. An IR engine may use the query to classify the documents in a collection, returning to the user the subset of documents that satisfies some classification criterion. Several search engines can find information in repositories containing large amounts of unstructured text data. However, the task of ad hoc information retrieval, finding documents within a corpus such as the Bible that are relevant to the user, remains a hard challenge. Sometimes relevant documents do not contain the specified keyword: the absence of a given term in a document does not necessarily mean that the document is not relevant, because two terms can be semantically similar even though they are lexicographically different. In this paper, a new algorithm called "Semantic-based Boolean Information Retrieval" (SBIR) is proposed to retrieve documents containing semantically similar terms, enhancing the performance of the Boolean information model by improving recall and precision.
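The core idea of matching semantically similar terms in a Boolean model can be sketched as follows: each query term is expanded with known synonyms before the conjunctive match, so a document using a different but related word is still retrieved. The synonym table and function names are illustrative assumptions, not the paper's SBIR implementation.

```python
# Toy synonym table; a real system would use a thesaurus such as WordNet.
SYNONYMS = {"happy": {"glad", "joyful"}, "big": {"large", "huge"}}

def expand(term):
    """A query term matches itself or any of its known synonyms."""
    return {term} | SYNONYMS.get(term, set())

def boolean_and_search(query_terms, documents):
    """Boolean AND retrieval: return ids of documents that contain
    (a synonym of) every query term."""
    hits = []
    for doc_id, text in documents.items():
        words = set(text.lower().split())
        if all(expand(t) & words for t in query_terms):
            hits.append(doc_id)
    return hits
```

With this expansion, a query for "big house" retrieves a document mentioning "large house" even though "big" never occurs in it, which is exactly the recall improvement the abstract describes.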
"An Approach to Improve Precision and Recall for Ad-hoc Information Retrieval Using SBIR Algorithm" — R. T. Selvi, E. Raj. 2014 World Congress on Computing and Communication Technologies, published 2014-04-03. doi:10.1109/WCCCT.2014.68
A MANET is a collection of wireless nodes with a high mobility ratio. A source node must construct a path to its destination in order to communicate. Because the nodes move very fast, a constructed path may not persist; a path may cease to exist even immediately after its construction. Although many mobility models and routing protocols exist, finding such a path is still a challenge for the mobile nodes in a MANET environment. Frequent path failures are unacceptable for certain applications in which the communication is important and urgent. In this work, an algorithm named "Time Account Based Path Stabilizer (TABPS)" is used to improve the stability of the constructed path between a source-destination pair.
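The abstract does not detail how TABPS works, but the general notion of path stability it targets can be illustrated: a route is only as stable as its weakest link, so among candidate routes one would prefer the route whose shortest-lived link survives longest. This is a generic stability metric for illustration only, not the paper's algorithm; the link-lifetime estimates are assumed inputs.

```python
def path_stability(link_lifetimes):
    """A path's stability is bounded by its shortest-lived link
    (predicted link lifetimes in seconds)."""
    return min(link_lifetimes)

def most_stable_path(candidates):
    """candidates: {path_name: [predicted link lifetimes]};
    return the path whose weakest link lasts longest."""
    return max(candidates, key=lambda p: path_stability(candidates[p]))
```

For example, a path with link lifetimes [12, 14] is preferred over one with [5, 30]: the latter's total is larger, but its weakest link breaks sooner.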
"Time Account Based Path Stabilization in MANET" — T. Manimegalai, C. Jayakumar. 2014 World Congress on Computing and Communication Technologies, published 2014-04-03. doi:10.1109/WCCCT.2014.46
Breast cancer is one of the most common cancers worldwide; in developed countries, about one in eight women develops breast cancer at some stage of her life. Early diagnosis of breast cancer plays a very important role in treating the disease. With the goal of identifying genes that are most correlated with breast cancer prognosis, we use data mining techniques to study the gene expression values of breast cancer patients with known clinical outcomes. K-means clustering is used to compare the results on test data. As a result, a set of genes is identified as potential biomarkers for breast cancer prognosis, which can categorize patients based on certain attributes. The gene expression levels discovered are compared with gene subsets identified in similar studies using clustering techniques.
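The first step described above, finding genes correlated with a known clinical outcome, can be sketched with a simple Pearson-correlation ranking: genes whose expression values track the outcome most strongly are ranked first as candidate biomarkers. This is a minimal illustration of correlation-based gene ranking, not the paper's exact pipeline.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_genes(expression, outcomes):
    """expression: {gene: [values, one per patient]};
    outcomes: binary clinical outcome per patient.
    Genes with the highest |correlation| are the strongest candidates."""
    return sorted(expression,
                  key=lambda g: abs(pearson(expression[g], outcomes)),
                  reverse=True)
```

A gene whose expression rises exactly with poor outcomes ranks above one whose values barely vary with the outcome.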
"Using K-Means Clustering Technique to Study of Breast Cancer" — R. Radha, P. Rajendiran. 2014 World Congress on Computing and Communication Technologies, published 2014-04-03. doi:10.1109/WCCCT.2014.64
A Mobile Ad-Hoc Network (MANET) is a collection of wireless mobile nodes forming a temporary network without any centralized access point or administration. MANET protocols face high challenges due to dynamically changing topologies, low transmission power, and asymmetric network links. An attempt has been made to compare the performance of two on-demand reactive routing protocols, AODV and DSR, which use gateway discovery algorithms, and a proactive routing protocol, DSDV, which constantly updates the network topology information available to all nodes, across different MANET scenarios. The comparison is made on the basis of performance metrics such as throughput, packet loss, and end-to-end delay, using the NS-2 simulator on the Ubuntu operating system (Linux). The simulations are carried out by varying the packet size, the number of simultaneously connected nodes, and the pause time, and the results are analyzed.
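The three metrics named above are typically computed by post-processing the simulator's trace file. As a sketch (assuming a simplified record format of packet id, send/receive event, timestamp, and size, rather than the full NS-2 trace syntax):

```python
def trace_metrics(events):
    """events: (packet_id, 's'|'r', time_seconds, size_bytes) records.
    Returns (throughput in bytes/s, packet loss ratio, avg end-to-end delay)."""
    sent, recv = {}, {}
    for pid, ev, t, size in events:
        (sent if ev == "s" else recv)[pid] = (t, size)
    # End-to-end delay: receive time minus send time, per delivered packet.
    delays = [recv[p][0] - sent[p][0] for p in recv]
    duration = max(t for _, _, t, _ in events) - min(t for _, _, t, _ in events)
    throughput = sum(size for _, size in recv.values()) / duration
    loss = 1 - len(recv) / len(sent)
    avg_delay = sum(delays) / len(delays)
    return throughput, loss, avg_delay
```

For instance, if two 512-byte packets are sent and only one is received 0.1 s later, the loss ratio is 0.5 and the average delay is 0.1 s.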
"Comparative Study of Proactive and Reactive AdHoc Routing Protocols Using Ns2" — S. Vanthana, V. Prakash. 2014 World Congress on Computing and Communication Technologies, published 2014-04-03. doi:10.1109/WCCCT.2014.40
Retinal images play a vital role in applications such as ocular fundus operations and human recognition, and they can be used to detect diabetes at an early stage by evaluating all the retinal blood vessels together. Detecting blood vessels in retinal images is generally a slow process. In this paper, an algorithm based on the contourlet transform is proposed to detect blood vessels efficiently. The contourlet transform, an extension of the wavelet transform, is used to enhance the retinal image, which is then used for segmentation. The existing curvelet transform has the disadvantage that its directional specificity is limited, which reduces its effectiveness. The directionality of multistructure elements makes them an effective tool for edge detection. Therefore, morphological operators with multistructure elements are applied to the enhanced image to locate the retinal image ridges. Morphological operators by reconstruction then remove the ridges not belonging to the vessel tree while preserving the thin, unaffected vessels. This approach uses multistructure elements to improve the performance of morphological operators by reconstruction. An improved Otsu thresholding method is combined with Strongly Connected Component Analysis (SCCA) to indicate the remaining ridges belonging to vessels. The experimental results show that the proposed method obtains 96% accuracy in detecting blood vessels, and it is compared with other existing approaches.
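The thresholding step above builds on standard Otsu thresholding (the paper's "improved" variant is not specified here). The classic method picks the grey level that maximises the between-class variance of the resulting foreground/background split:

```python
def otsu_threshold(pixels, levels=256):
    """Standard Otsu: return the grey level maximising between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0  # weight and intensity sum of the background class
    for t in range(levels):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                          # background mean
        m1 = (total_sum - sum0) / (total - w0)  # foreground mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal image (e.g. dark vessels on a bright background) the chosen threshold separates the two intensity clusters.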
"Retinal Image Analysis Using Contourlet Transform and Multistructure Elements Morphology by Reconstruction" — D. Karthika, A. Marimuthu. 2014 World Congress on Computing and Communication Technologies, published 2014-04-03. doi:10.1109/WCCCT.2014.15
Nowadays in higher education, the academic community faces issues in monitoring and analyzing the progress of students' academic performance. In the real world, predicting student performance is a challenging task. Currently, cluster analysis is used to analyze students' results, and statistical algorithms are used to segregate their marks based on performance, but this is not very effective. We therefore combine the k-means clustering algorithm with a deterministic model to analyze and monitor students' results and performance. With k-means clustering, the progress of students' academic performance in higher institutions can be monitored more efficiently, providing accurate results in a short period of time. In this paper, we apply the methodology to student test scores to find various interesting patterns.
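The combination described above, k-means made deterministic, can be sketched in one dimension for GPA data: with fixed (deterministic) initial centroids, each student is assigned to the nearest centroid and centroids are recomputed until the bands stabilise. The centroid values and iteration count are illustrative assumptions.

```python
def kmeans_1d(values, centroids, iters=20):
    """1-D k-means with fixed initial centroids (deterministic).
    Returns the final centroids and the clustered values."""
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters
```

Starting from centroids 2.5 and 3.5, the GPAs [2.0, 2.1, 3.8, 3.9] settle into a low-performance band around 2.05 and a high-performance band around 3.85.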
"Discovering Students' Academic Performance Based on GPA Using K-Means Clustering Algorithm" — J. Jamesmanoharan, S. Ganesh, M. L. P. Felciah, A. K. Shafreenbanu. 2014 World Congress on Computing and Communication Technologies, published 2014-02-01. doi:10.1109/WCCCT.2014.75
Numerous functionally similar services evolve day by day, and selecting the service that exactly matches a consumer's requirements is a tedious task. The QoS-based Service Selection Problem (SSP) is the process of allocating a QoS-based external web service component to each task of the workflow that describes a composite web service, such that the aggregate QoS of the composite web service is the best. By nature, it is a planning problem. This paper gives a brief overview of a heuristic-based service selection algorithm (LASA-HEU) for the MMKP form of the reliability-enforced SSP. It also compares the proposed LASA-HEU with the existing heuristic-based SSA and shows that LASA-HEU performs better with respect to reliability.
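The MMKP framing above can be illustrated with a simple greedy heuristic (not necessarily the paper's LASA-HEU, whose details are not given here): for each workflow task, pick the candidate service with the highest reliability among those fitting the remaining cost budget; for a sequential composition, aggregate reliability is the product of the parts. The candidate names, reliabilities, and costs are invented for illustration.

```python
def greedy_select(tasks, budget):
    """tasks: one candidate list per workflow task; each candidate is
    (name, reliability, cost). Greedily maximise reliability per task
    subject to a shared cost budget (an MMKP-style constraint)."""
    plan, total_rel = [], 1.0
    for candidates in tasks:
        feasible = [c for c in candidates if c[2] <= budget]
        if not feasible:
            return None  # no candidate fits the remaining budget
        name, rel, cost = max(feasible, key=lambda c: c[1])
        plan.append(name)
        total_rel *= rel  # sequential composition: reliabilities multiply
        budget -= cost
    return plan, total_rel
```

Note that a greedy choice can be suboptimal for MMKP, which is why heuristic design (the paper's focus) matters: spending most of the budget on the first task may starve later ones.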
"LASA-HEU: Heuristic Approach for Service Selection in Composite Web Services" — N. Sasikaladevi, L. Arockiam. 2014 World Congress on Computing and Communication Technologies, published 2014-02-01. doi:10.1109/WCCCT.2014.73
Enhanced images have higher quality and clarity than the originally captured images. Computer-vision image enhancement (color conversion and histogram equalization) is used in real-time applications such as remote sensing, medical image analysis, and plant leaf disease detection. The originally captured images are RGB images, combinations of the primary colors red, green, and blue. Applications are difficult to implement on them because each color channel ranges from 0 to 255, whereas grayscale images use a single normalized channel in the range 0 to 1, which makes many applications easier to implement. Histogram equalization is used to increase image clarity. Grayscale conversion and histogram equalization are used in plant leaf disease detection.
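The two steps can be sketched directly: RGB-to-grayscale conversion via the standard luminance weights (0.299R + 0.587G + 0.114B), then histogram equalization, which remaps each grey level through the cumulative distribution so that the contrast spreads over the full range. A minimal sketch on flat pixel lists, assuming 8-bit grey levels:

```python
def to_gray(rgb_pixels):
    """Convert (r, g, b) pixels to 8-bit grey via standard luminance weights."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in rgb_pixels]

def equalize(gray, levels=256):
    """Histogram equalization: remap grey levels through the cumulative
    distribution so the output spans the full 0..levels-1 range."""
    hist = [0] * levels
    for g in gray:
        hist[g] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(gray)
    return [round((cdf[g] - cdf_min) / (n - cdf_min) * (levels - 1)) for g in gray]
```

For example, a low-contrast image whose pixels sit at levels 100, 200, and 255 is stretched so its darkest level maps to 0 and its brightest to 255.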
"Computer Visionimage Enhancement for Plant Leaves Disease Detection" — K. Thangadurai, K. Padmavathi. 2014 World Congress on Computing and Communication Technologies, published 2014-02-01. doi:10.1109/WCCCT.2014.39