Fast pornographic video detection using Deep Learning
Pub Date: 2021-08-19. DOI: 10.1109/RIVF51545.2021.9642154. Pages: 1-6
Vinh-Nam Huynh, H. H. Nguyen
The recent rapid development of internet technology and applications has led to a boom in videos uploaded to and shared on the internet. However, some of these videos contain content that is impermissible for viewers, especially pornography. This raises a major challenge: filtering very large numbers of input videos. To address it, we introduce a system in which a key-frame extraction method is first applied to the input video. The TensorFlow object detection API then detects and crops any person appearing in these frames, and a Convolutional Neural Network (CNN) model classifies the cropped images as pornographic or not. The video is finally marked as valid for publishing if the number of adult frames is below a threshold. Our experiments show that the proposed system processes videos much faster than humans do, at an accuracy of around 90%, which makes it a meaningful aid in the task of video filtering.
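To make the decision step concrete, here is a minimal Python sketch of such a pipeline's control flow, assuming the key-frame extractor, person detector, and per-crop classifier are supplied as callables (the function names, probability threshold, and frame threshold are illustrative assumptions, not the paper's published code):

```python
from typing import Callable, Iterable, List

def filter_video(key_frames: Iterable,                     # output of a key-frame extractor
                 detect_people: Callable[[object], List],  # e.g. an object-detection model
                 classify_crop: Callable[[object], float], # CNN porn probability per crop
                 prob_threshold: float = 0.5,
                 max_adult_frames: int = 3) -> bool:
    """Return True if the video may be published.

    A frame counts as adult when any detected-person crop is classified
    as pornographic with probability above prob_threshold; the video is
    rejected once the adult-frame count reaches max_adult_frames.
    """
    adult_frames = 0
    for frame in key_frames:
        for crop in detect_people(frame):
            if classify_crop(crop) > prob_threshold:
                adult_frames += 1
                break                      # one adult crop marks the frame
        if adult_frames >= max_adult_frames:
            return False                   # early exit: video rejected
    return True
```

The early exit is one reason such a pipeline can beat human review on speed: a clearly pornographic video can be rejected after only a few key frames.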
{"title":"Fast pornographic video detection using Deep Learning","authors":"Vinh-Nam Huynh, H. H. Nguyen","doi":"10.1109/RIVF51545.2021.9642154","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642154","url":null,"abstract":"The recent rapid development of internet technology and applications leads to the booming of videos uploaded to and shared on the internet. However, some of them may contain impermissible content, especially pornographic videos, for the viewers. This problem raises a vast challenge in video filtering for many input videos. Concerning this matter, we introduce our system in which a key frame extraction method will be applied to the input video at the very first step. Subsequently, the Tensorflow object detection API is in charge of detecting and cropping any existing person in these frames. A Convolutional Neural Network (CNN) model then takes the cropped images and classifies them as pornography or not. The video is finally is marked valid for publishing according if the number of adult frames is below a threshold. Our experiments show that the proposed system can process videos much faster than human do while the accuracy is around 90% which can be meaningful to assist people in the task of video filtering.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"12 23","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91405544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strengthening IDS against Evasion Attacks with GAN-based Adversarial Samples in SDN-enabled network
Pub Date: 2021-08-19. DOI: 10.1109/RIVF51545.2021.9642111. Pages: 1-6
Cao Phan Xuan Qui, Dang Hong Quang, Phan The Duy, Do Thi Thu Hien, V. Pham
With the spread of smart devices in the context of the Smart City, Software Defined Networking (SDN) is considered a vital principle for managing a large-scale heterogeneous network with a centralized controller. To deal with cyberattacks against such networks, an intrusion detection system (IDS) is built to recognize attacks and alert the system administrator for further appropriate response. Machine learning-based IDS (ML-IDS) has been explored and is still being developed. However, these systems produce a high rate of false alerts and are easily deceived by sophisticated attacks, such as attack variants containing perturbations. It is therefore necessary to continuously evaluate and improve these systems by simulating mutations of real-world network attacks. Relying on Generative Adversarial Networks (GANs), we introduce DIGFuPAS, a framework that generates cyberattack data flows capable of bypassing ML-IDS. It can generate malicious data streams, mutated from real attack traffic, that the IDS cannot detect. The generated traffic flows are used to retrain the ML-IDS, improving its robustness in detecting sophisticated attacks. The experiments are performed and evaluated on two criteria, detection rate (DR) and F1 score (F1), using the public CICIDS2017 dataset. DIGFuPAS can be used to continuously pentest and evaluate IDS capability once integrated as an automated sustainability test pipeline for SDN-enabled networks.
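As a rough sketch of the underlying GAN mechanism (not DIGFuPAS's exact architecture), the PyTorch snippet below trains a generator to turn real attack-flow feature vectors plus noise into mutated flows, while a discriminator stands in for the ML-IDS decision boundary; the feature count, layer sizes, and learning rates are illustrative assumptions:

```python
import torch
import torch.nn as nn

FEATS, NOISE = 78, 16   # e.g. 78 CICIDS2017-style flow features (assumed)

G = nn.Sequential(nn.Linear(FEATS + NOISE, 128), nn.ReLU(),
                  nn.Linear(128, FEATS))
D = nn.Sequential(nn.Linear(FEATS, 128), nn.ReLU(),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

def train_step(attack_batch, benign_batch):
    noise = torch.randn(attack_batch.size(0), NOISE)
    fake = G(torch.cat([attack_batch, noise], dim=1))   # mutated attack flows

    # Discriminator: benign traffic -> 1, adversarial attack traffic -> 0.
    opt_d.zero_grad()
    loss_d = bce(D(benign_batch), torch.ones(benign_batch.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(fake.size(0), 1))
    loss_d.backward()
    opt_d.step()

    # Generator: make mutated attack flows look benign to the IDS stand-in.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(fake.size(0), 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

In the paper's workflow, the adversarial flows that evade detection would then be fed back to retrain the ML-IDS.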
{"title":"Strengthening IDS against Evasion Attacks with GAN-based Adversarial Samples in SDN-enabled network","authors":"Cao Phan Xuan Qui, Dang Hong Quang, Phan The Duy, Do Thi Thu Hien, V. Pham","doi":"10.1109/RIVF51545.2021.9642111","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642111","url":null,"abstract":"With the spread of the number of smart devices in the context of Smart City, Software Defined Networking (SDN) is considered as a vital principle to manage a large-scale heterogeneous network within centralized controller. To deal with cyberattacks against such networks, intrusion detection system (IDS) is built to recognize and alert to the system administrator for further appropriate response. Currently, machine learning-based IDS (ML-IDS) has been explored and is still being developed. However, these systems give a high rate of false alert and are easily deceived by sophisticated attacks such as variants of attacks containing perturbation. Therefore, it is necessary to continuously evaluate and improve these systems by simulating mutation of real-world network attack. Relied on the Generative Discriminative Networks (GANs), we introduce DIGFuPAS, a framework that generates data flow of cyberattacks capable of bypassing ML-IDS. It can generate malicious data streams that mutate from real attack traffic making the IDS undetectable. The generated traffic flow is used to retrain ML-IDS, for improving the robustness of IDS in detecting sophisticated attacks. The experiments are performed and evaluated through 2 criteria: Detection rate (DR) and F1 Score (F1) on the public dataset, named CICIDS2017. DIGFuPAS can be used for continuously pentesting and evaluating IDS’s capability once integrated as an automated sustainability test pipeline for SDN-enabled networks.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"64 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83949433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive Z-line segmentation tool for Upper Gastrointestinal Endoscopy Images using Binary Partition Tree and U-net
Pub Date: 2021-08-19. DOI: 10.1109/RIVF51545.2021.9642141. Pages: 1-6
X. Manh, Hai Vu, Xuan Dung Nguyen, Linh Hoang Pham Tu, V. Dao, Phuc Binh Nguyen, M. Nguyen
The Z-line is the junction between the esophageal and gastric mucosa and an important landmark in exploring esophageal diseases such as Gastroesophageal Reflux Disease (GERD). This paper describes an effective interactive segmentation tool for Z-line annotation in Upper Gastrointestinal Endoscopy (UGIE) images. To this end, we propose a method consisting of two main steps. First, a coarse scheme roughly segments the boundary regions of the Z-line: thanks to recent advances of deep neural networks in biomedical imaging, such as U-net segmentation, Z-line annotation is achieved automatically with acceptable results. However, the U-net segmentation is not accurate enough due to the complexity of the gastric mucosa. We therefore propose a fine-tuning scheme that prunes the U-net's results. It is based on Binary Partition Tree (BPT) algorithms, with the BPT built into a Graphical User Interface (GUI). The objective of the proposed framework is to help endoscopists achieve the best segmentation results with the least interaction effort via the GUI. An experiment was set up to evaluate the effectiveness of the proposed method by comparing the performance of four segmentation schemes: manual segmentation by hand, full automation by U-net, interactive segmentation via BPT only, and the proposed scheme (U-net+BPT). The results confirmed that the proposed method converges to the ideal regions faster than the other three: it required the lowest time cost and user effort while achieving the best accuracy. The proposed method also suggests a feasible solution for segmenting abnormal regions in UGIE images.
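The coarse stage alone can be approximated in a few lines: binarize the U-net probability map and keep the largest connected component as the initial Z-line region that the interactive BPT stage would then refine. This is a simplified sketch under that assumption; the BPT refinement and the GUI are omitted:

```python
import numpy as np
from scipy import ndimage

def coarse_zline_mask(prob_map: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Binarize a U-net probability map and keep the largest blob."""
    binary = prob_map > thresh
    labels, n = ndimage.label(binary)       # connected components
    if n == 0:
        return binary                       # nothing segmented
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```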
{"title":"Interactive Z-line segmentation tool for Upper Gastrointestinal Endoscopy Images using Binary Partition Tree and U-net","authors":"X. Manh, Hai Vu, Xuan Dung Nguyen, Linh Hoang Pham Tu, V. Dao, Phuc Binh Nguyen, M. Nguyen","doi":"10.1109/RIVF51545.2021.9642141","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642141","url":null,"abstract":"Z-line is a junction between esophageal and gastric mucosa which is an important landmark in exploring esophageal diseases such as Gastroesophageal Reflux Diseases (GERD). This paper describes an effective interactive segmentation tool for Z-line annotation from Upper Gastrointestinal Endoscopy (UGIE) images. To this end, we propose a method containing of two main steps: firstly, a coarse scheme is designed to roughly segment boundary regions of Z-line. Thanks to recent advances of deep neural networks in biomedical imaging such as U-net segmentation, Z-line annotation is automatically achieved with acceptable results. However, the U-net’s segmentation is not accurate enough due to gastric mucosa complexity. We then propose a fine-tuning scheme, which aims to prune the U-net’s results. The proposed method is based on Binary Partition Tree (BPT) algorithms, which BPT is built-in into a Graphic User Interface. Objective of the proposed framework is to help endoscopy doctors achieve the best segmentation results with lowest efforts of interactions via the GUI. The experiment was setup to evaluate effectiveness of the proposed method by comparing performances of four different segmentation schemes. They are manual segmentation by hand, fully automation by U-net, the interactive segmentation via BPT only, and the proposed scheme (U-net+BPT). The results confirmed that the proposed method converged faster to ideal regions than the other three. It took the lowest time costs and users’ efforts but achieved the best accuracy. The proposed method also suggest a feasible solution for segmenting abnormal regions in UGIE images.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"49 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74364310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VinaFood21: A Novel Dataset for Evaluating Vietnamese Food Recognition
Pub Date: 2021-08-06. DOI: 10.1109/RIVF51545.2021.9642151. Pages: 1-6
Trong-Thuan Nguyen, Thuan Q. Nguyen, D. Vo, Vien Nguyen, Ngoc Ho, Nguyen D. Vo, Kiet Van Nguyen, Khang Nguyen
Vietnam is an attractive tourist destination, with stunning, pristine landscapes and top-rated, unique food and drink. Among thousands of Vietnamese dishes, both foreigners and locals gravitate toward those with easy-to-eat tastes and easy-to-follow recipes, along with reasonable prices, mouthwatering flavors, and popularity. Because of this diversity, because many dishes are strikingly similar, and because quality Vietnamese food datasets are lacking, it is hard to implement an automatic system that classifies Vietnamese food and thereby helps people discover it. This paper introduces a new Vietnamese food dataset named VinaFood21, which consists of 13,950 images corresponding to 21 dishes. We use 10,044 images for model training and 6,682 test images to classify each food in the VinaFood21 dataset, achieving an average accuracy of 74.81% when fine-tuning the CNN EfficientNet-B0.
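A minimal version of the reported fine-tuning setup might look like the torchvision sketch below: ImageNet-pretrained EfficientNet-B0 with its head replaced by a 21-way classifier for VinaFood21 (the optimizer, learning rate, and training loop are illustrative assumptions, not the authors' exact recipe):

```python
import torch
import torchvision

# EfficientNet-B0 pretrained on ImageNet; swap the 1000-way head
# for the 21 VinaFood21 classes.
model = torchvision.models.efficientnet_b0(weights="IMAGENET1K_V1")
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 21)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_epoch(loader):
    """One pass over a DataLoader yielding (image batch, label batch)."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```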
{"title":"VinaFood21: A Novel Dataset for Evaluating Vietnamese Food Recognition","authors":"Trong-Thuan Nguyen, Thuan Q. Nguyen, D. Vo, Vien Nguyen, Ngoc Ho, Nguyen D. Vo, Kiet Van Nguyen, Khang Nguyen","doi":"10.1109/RIVF51545.2021.9642151","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642151","url":null,"abstract":"Vietnam is such an attractive tourist destination with its stunning and pristine landscapes and its top-rated unique food and drink. Among thousands of Vietnamese dishes, foreigners and native people are interested in easy-to-eat tastes and easy-to-do recipes, along with reasonable prices, mouthwatering flavors, and popularity. Due to the diversity and almost all the dishes have significant similarities and the lack of quality Vietnamese food datasets, it is hard to implement an auto system to classify Vietnamese food, therefore, make people easier to discover Vietnamese food. This paper introduces a new Vietnamese food dataset named VinaFood21, which consists of 13,950 images corresponding to 21 dishes. We use 10,044 images for model training and 6,682 test images to classify each food in the VinaFood21 dataset and achieved an average accuracy of 74.81% when fine-tuning CNN EfficientNet-B0.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"12 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74723276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatically Detecting Cyberbullying Comments on Online Game Forums
Pub Date: 2021-06-03. DOI: 10.1109/RIVF51545.2021.9642116. Pages: 1-5
Hanh Hong-Phuc Vo, H. Tran, Son T. Luu
Online game forums are popular with most game players, who use them to communicate, discuss game strategy, and even make friends. However, game forums also contain abusive and harassing speech that disturbs and threatens players. It is therefore necessary to automatically detect and remove cyberbullying comments to keep game forums clean and friendly. We use the Cyberbullying dataset, collected from the World of Warcraft (WoW) and League of Legends (LoL) forums, and train classification models to automatically detect whether a player's comment is abusive or not. On this dataset, the Toxic-BERT model obtains a macro F1-score of 82.69% on the LoL forum and 83.86% on the WoW forum.
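For a sense of how such a model is applied at inference time, the sketch below runs the public unitary/toxic-bert checkpoint over example comments via the Hugging Face pipeline API; the authors' fine-tuned weights and label set may differ, so treat this as an assumption-laden stand-in:

```python
from transformers import pipeline

# unitary/toxic-bert is a publicly available Toxic-BERT checkpoint.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

comments = ["gg, well played everyone",
            "uninstall the game, you are trash"]
for comment in comments:
    result = classifier(comment)[0]        # {'label': ..., 'score': ...}
    print(f"{comment!r} -> {result['label']} ({result['score']:.2f})")
```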
{"title":"Automatically Detecting Cyberbullying Comments on Online Game Forums","authors":"Hanh Hong-Phuc Vo, H. Tran, Son T. Luu","doi":"10.1109/RIVF51545.2021.9642116","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642116","url":null,"abstract":"Online game forums are popular to most of game players. They use it to communicate and discuss the strategy of the game, or even to make friends. However, game forums also contain abusive and harassment speech, disturbing and threatening players. Therefore, it is necessary to automatically detect and remove cyberbullying comments to keep the game forum clean and friendly. We use the Cyberbullying dataset collected from World of Warcraft (WoW) and League of Legends (LoL) forums and train classification models to automatically detect whether a comment of a player is abusive or not. The result obtains 82.69% of macro F1-score for LoL forum and 83.86% of macro F1-score for WoW forum by the Toxic-BERT model on the Cyberbullying dataset.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"52 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2021-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82014066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multimodal Fusion with BERT and Attention Mechanism for Fake News Detection
Pub Date: 2021-04-23. DOI: 10.1109/RIVF51545.2021.9642125. Pages: 1-6
Nguyen Manh Duc Tuan, Pham Quang Nhat Minh
Fake news detection is an important task for increasing the reliability of information on the internet, since fake news spreads fast on social media and has a negative effect on society. In this paper, we present a novel method for detecting fake news by fusing multi-modal features derived from textual and visual data. Specifically, we propose a scaled dot-product attention mechanism to capture the relationship between text features extracted by a pre-trained BERT model and visual features extracted by a pre-trained VGG-19 model. Experimental results show that our method improves on the current state-of-the-art method on a public Twitter dataset by 3.1% in accuracy.
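The fusion step itself is compact: text token embeddings act as queries and visual region features as keys and values in scaled dot-product attention. The PyTorch sketch below shows the operation with illustrative dimensions (the shared feature dimension and the final concatenation are assumptions, not the paper's exact wiring):

```python
import torch
import torch.nn.functional as F

def fuse(text_feats: torch.Tensor, img_feats: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention: text queries attend over image features.

    text_feats: (batch, seq_len, d), e.g. BERT last hidden states
    img_feats:  (batch, regions, d), e.g. projected VGG-19 feature-map cells
    """
    d = text_feats.size(-1)
    scores = torch.matmul(text_feats, img_feats.transpose(-2, -1)) / d ** 0.5
    attn = F.softmax(scores, dim=-1)              # (batch, seq_len, regions)
    attended = torch.matmul(attn, img_feats)      # (batch, seq_len, d)
    return torch.cat([text_feats, attended], dim=-1)

text = torch.randn(2, 32, 768)    # token embeddings
image = torch.randn(2, 49, 768)   # 7x7 feature map flattened to 49 regions
fused = fuse(text, image)         # (2, 32, 1536), fed to the classifier
```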
{"title":"Multimodal Fusion with BERT and Attention Mechanism for Fake News Detection","authors":"Nguyen Manh Duc Tuan, Pham Quang Nhat Minh","doi":"10.1109/RIVF51545.2021.9642125","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642125","url":null,"abstract":"Fake news detection is an important task for in- creasing the reliability of the information on the internet since fake news is spreading fast on social media and has a negative effect on our society. In this paper, we present a novel method for detecting fake news by fusing multi-modal features derived from textual and visual data. Specifically, we proposed a scaled dot- product attention mechanism to capture the relationship between text features extracted by a pre-trained BERT model and visual features extracted by a pre-trained VGG-19 model. Experimental results showed that our method improved against the current state-of-the-art method on a public Twitter dataset by 3.1% accuracy.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"24 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76306462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Empirical Study for Vietnamese Constituency Parsing with Pre-training
Pub Date: 2020-10-19. DOI: 10.1109/RIVF51545.2021.9642143. Pages: 1-6
Tuan-Vi Tran, Xuan-Thien Pham, Duc-Vu Nguyen, Kiet Van Nguyen, N. Nguyen
Constituency parsing is an important task that is receiving growing attention in natural language processing. In this work, we use a span-based approach for Vietnamese constituency parsing. Our method follows a self-attention encoder architecture with a chart decoder that uses a CKY-style inference algorithm. We analyze experimental results comparing the pre-trained models XLM-R and PhoBERT within our method on two Vietnamese datasets, VietTreebank and NIIVTB1. The results show that our model with XLM-R achieved significantly better F1-scores than the other pre-trained models: 81.19% on VietTreebank and 85.70% on NIIVTB1.
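A CKY-style chart decoder of the kind mentioned reduces to a simple dynamic program over span scores. In the sketch below, span_score(i, j) stands in for the encoder's learned score of a constituent covering words i..j-1 (here random), and label prediction is omitted for brevity:

```python
import numpy as np

def cky(n, span_score):
    """best[i][j]: score of the best binary tree over words i..j-1."""
    best = [[0.0] * (n + 1) for _ in range(n + 1)]
    split = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):                       # single-word spans
        best[i][i + 1] = span_score(i, i + 1)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            k = max(range(i + 1, j), key=lambda k: best[i][k] + best[k][j])
            best[i][j] = span_score(i, j) + best[i][k] + best[k][j]
            split[i][j] = k                  # backpointer for tree recovery
    return best, split

rng = np.random.default_rng(0)
best, split = cky(6, lambda i, j: float(rng.normal()))
print(best[0][6], split[0][6])               # root score, top split point
```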
{"title":"An Empirical Study for Vietnamese Constituency Parsing with Pre-training","authors":"Tuan-Vi Tran, Xuan-Thien Pham, Duc-Vu Nguyen, Kiet Van Nguyen, N. Nguyen","doi":"10.1109/RIVF51545.2021.9642143","DOIUrl":"https://doi.org/10.1109/RIVF51545.2021.9642143","url":null,"abstract":"Constituency parsing is an important task that gets more attention in natural language processing. In this work, we use a span-based approach for Vietnamese constituency parsing. Our method follows the self-attention encoder architecture and a chart decoder using a CKY-style inference algorithm. We present analyses of the experiment results of the comparison of our empirical method using pre-training models XLM-R and PhoBERT on both Vietnamese datasets VietTreebank and NIIVTB1. The results show that our model with XLM-R archived the significantly F1-score better than other pre-training models, VietTreebank at 81.19% and NIIVTB1 at 85.70%.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"52 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2020-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84687850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diagnosis of heart disease patients using fuzzy classification technique
Pub Date: 2014-12-01. DOI: 10.1109/ICCCT2.2014.7066746. Pages: 1-7
V. Krishnaiah, M. Srinivas, G. Narsimha, N. S. Chandra
Extensive investigation of medical data with data mining techniques has shown that predicting heart disease is very important in medical science. Medical records are unstructured, heterogeneous data, and the attributes they contain must be analyzed to predict heart disease and provide information for diagnosing a heart patient. Various data mining techniques have been applied to predict heart disease patients, but the techniques available in data mining, as implemented by various authors, did not remove the uncertainty in the data. To remove the uncertainty of unstructured data, we introduce fuzziness into the measured data: a membership function is designed and applied to each measured value, and the fuzzified data are used to predict heart disease patients. Further, patients are classified based on the attributes collected in the medical field. A minimum-Euclidean-distance fuzzy K-NN classifier is designed to classify the training and testing data belonging to different classes. We found that the fuzzy K-NN classifier compares favorably with other parametric classifiers.
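A minimal Euclidean-distance fuzzy K-NN in the spirit described can be sketched as follows, using Keller-style inverse-distance memberships; the membership function and the toy attribute values are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def fuzzy_knn(X_train, y_train, x, k=3, m=2):
    """Return (predicted class, fuzzy membership per class)."""
    d = np.linalg.norm(X_train - x, axis=1)          # Euclidean distances
    idx = np.argsort(d)[:k]                          # k nearest neighbors
    w = 1.0 / np.maximum(d[idx], 1e-9) ** (2 / (m - 1))
    classes = np.unique(y_train)
    mu = np.array([w[y_train[idx] == c].sum() for c in classes])
    mu /= mu.sum()                                   # normalized memberships
    return classes[np.argmax(mu)], mu

# Toy example: two measured attributes per patient, 0 = healthy, 1 = disease.
X = np.array([[120, 80], [140, 95], [130, 85], [170, 110]], dtype=float)
y = np.array([0, 1, 0, 1])
label, mu = fuzzy_knn(X, y, np.array([135.0, 90.0]))
print(label, mu)
```

Unlike a crisp K-NN vote, the membership vector mu conveys how uncertain the assignment is, which is the point of fuzzifying the measured data.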
{"title":"Diagnosis of heart disease patients using fuzzy classification technique","authors":"V. Krishnaiah, M. Srinivas, G. Narsimha, N. S. Chandra","doi":"10.1109/ICCCT2.2014.7066746","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066746","url":null,"abstract":"Data mining technique in the history of medical data found with enormous investigations found that the prediction of heart disease is very important in medical science. In medical history it is observed that the unstructured data as heterogeneous data and it is observed that the data formed with different attributes should be analyzed to predict and provide information for making diagnosis of a heart patient. Various techniques in Data Mining have been applied to predict the heart disease patients. But, the uncertainty in data was not removed with the techniques available in data mining and implemented by various authors. To remove uncertainty of unstructured data, an attempt was made by introducing fuzziness in the measured data. A membership function was designed and incorporated with the measured value to remove uncertainty and fuzzified data was used to predict the heart disease patients.. Further, an attempt was made to classify the patients based on the attributes collected from medical field. Minimum Euclidean distance Fuzzy K-NN classifier was designed to classify the training and testing data belonging to different classes. It was found that Fuzzy K-NN classifier suits well as compared with other classifiers of parametric techniques.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"26 11 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77263630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scalable load balancing using virtualization based on approximation
Pub Date: 2014-12-01. DOI: 10.1109/ICCCT2.2014.7066720. Pages: 1-5
Mohammed A. Saifullah, M. A. Maluk Mohammed
The number of users and services on the Internet is increasing day by day, resulting in high traffic and load on web servers. This in turn increases the service time of web requests and degrades quality of service. A well-known solution to this problem is replicating content across a cluster of web servers. An efficient server load-balancing policy is then required to achieve scalability and high performance for the service offered by the cluster. Under dynamic, secure, and database-driven loads, existing load-balancing strategies suffer from performance degradation. In this paper, we propose Scalable Load Balancing using Virtualization based on an Approximation Algorithm (SLBVA). SLBVA is an estimation strategy, since it is challenging to measure the load on each web server of a cluster exactly. The SLBVA algorithm can offer guarantees for different client priorities, such as premium and default customers. We show that with the SLBVA strategy, web servers can maintain Service Level Agreements (SLAs) without a priori over-dimensioning of server resources. This is achieved by taking a real-time view of the service requests, measuring current arrival rates, and judiciously discarding some requests from default clients when default-customer traffic is high. If the arrival rate of premium customers exceeds the capacity of the cluster, the cluster is expanded through virtualization, using idle servers from under-utilized server farms. We analyzed and compared the experimental results of the SLBVA algorithm with those of the popular Weighted Round Robin (WRR) load-balancing algorithm. We show that although SLBVA consumes slightly more server processing resources than WRR, it can provide assurances that WRR cannot.
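The priority-aware admission idea can be sketched as a sliding-window rate estimator that always admits premium requests and sheds default requests once the estimated rate reaches the cluster's capacity. This simplified, assumption-laden sketch omits the virtualization step that grows the cluster when premium traffic alone exceeds capacity; all names and the window length are illustrative:

```python
import time
from collections import deque

class AdmissionController:
    """Shed default-class requests when estimated load nears capacity."""

    def __init__(self, capacity_rps: float, window_s: float = 5.0):
        self.capacity = capacity_rps       # cluster capacity, requests/sec
        self.window = window_s             # sliding window for rate estimate
        self.arrivals = deque()            # timestamps of admitted requests

    def _rate(self) -> float:
        now = time.monotonic()
        while self.arrivals and now - self.arrivals[0] > self.window:
            self.arrivals.popleft()        # drop stale timestamps
        return len(self.arrivals) / self.window

    def admit(self, premium: bool) -> bool:
        if premium or self._rate() < self.capacity:
            self.arrivals.append(time.monotonic())
            return True
        return False                       # discard a default-class request
```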
{"title":"Scalable load balancing using virtualization based on approximation","authors":"Mohammed A. Saifullah, M. A. Maluk Mohammed","doi":"10.1109/ICCCT2.2014.7066720","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066720","url":null,"abstract":"The number of users and services on Internet are increasing day by day resulting in high traffic and load on the web servers. This is in turn increasing the service time of web requests and degrading the quality of service. A well known solution to this problem is replication of content using cluster of web servers. An efficient server load balancing policy is required to achieve scalability and high performance of the service offered by cluster of web servers. Under dynamic, secure and database driven loads, existing load balancing strategies are suffering from performance degradation. In this paper, we proposed Scalable Load Balancing using Virtualization based on Approximation Algorithm (SLBVA). SLBVA is an estimation strategy as it is challenging to correctly measure the load on each web server of a cluster. SLBVA algorithm is capable of offering guarantees for different client priorities, such as premium customers and default customers. We show that using SLBVA strategy web servers are able to maintain Service Level Agreements (SLA) without the need of a priori over-dimensioning of server resources. This is achieved by taking the real perspective of the service requests using the measurement of arrival rates at that time and judiciously discard some requests from the default clients if the default customers traffic is high. If the arrival rate of premium customers goes beyond the capacity of cluster, we will increase the capacity of cluster using virtualization by utilizing the unused servers from the under-utilized server farms. We analyzed and compared the experimental results of SLBVA algorithm with the results of the very popular load balancing algorithm, Weighted Round Robin (WRR). We show that even though the SLBVA strategy takes a little more server processing resources than WRR, it is capable to render assurances unlike WRR.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"53 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81268264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Byte Rotation Encryption Algorithm through parallel processing and multi-core utilization
Pub Date: 2014-12-01. DOI: 10.1109/ICCCT2.2014.7066719. Pages: 1-4
G. Thirumaleswari, Ch. Suneetha, G. C. Bharathi
Securing digital data has become a tedious task as technology advances. Existing encryption algorithms such as AES, DES, and Blowfish ensure information security but consume a lot of time as the security level increases. In this paper, the Byte Rotation Encryption Algorithm (BREA) is implemented using parallel processing and multi-core utilization. BREA divides the data into fixed-size blocks, and each block is processed in parallel using a random-number key, so the set of blocks executes in parallel across all available CPU cores. From the experimental analysis, we observe that the proposed BREA algorithm reduces execution time as the number of cores increases.
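One plausible reading of the scheme, sketched below, splits the plaintext into fixed-size blocks and rotates each block's byte values by its own key on a separate core; the rotation rule and key handling are assumptions, since the paper's exact algorithm is not reproduced here:

```python
import os
from concurrent.futures import ProcessPoolExecutor

BLOCK = 4096                                   # fixed block size in bytes

def rotate_block(args):
    """Rotate every byte in a block by the block's key (mod 256)."""
    block, key = args
    return bytes((b + key) % 256 for b in block)

def encrypt(data: bytes, keys) -> bytes:
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return b"".join(pool.map(rotate_block, zip(blocks, keys)))

if __name__ == "__main__":
    data = os.urandom(3 * BLOCK)
    keys = [7, 42, 105]                        # one random key per block
    ciphertext = encrypt(data, keys)
    # Decryption rotates by (256 - key) per block with the same machinery.
```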
{"title":"Byte Rotation Encryption Algorithm through parallel processing and multi-core utilization","authors":"G. Thirumaleswari, Ch. Suneetha, G. C. Bharathi","doi":"10.1109/ICCCT2.2014.7066719","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066719","url":null,"abstract":"Securing digital data has become tedious task as the technology is increasing. Existing encryption algorithms such as AES,DES and Blowfish ensure information security but consume lot of time as the security level increases. In this paper, Byte Rotation Encryption Algorithm (BREA) has implemented using parallel processing and multi-core utilization. BREA divides data into fixed size blocks. Each block is processed parallely using random number key. So the set of blocks are executed in parallel by utilizing all the available CPU cores. Finally, from the experimental analysis, it is observed that the proposed BREA algorithm reduces execution time when the number of cores has increased.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"78 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76886974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}