Energy-Effective Service-Oriented Cloud Resource Allocation Model Based on Workload Prediction
T. Ahammad, Uzzal Kumar Acharjee, M. Hasan
2018 21st International Conference of Computer and Information Technology (ICCIT). Pub Date: 2018-12-01. DOI: 10.1109/ICCITECHN.2018.8631953
The rising demand for cloud computing tends to increase energy consumption, so a sustainable computing environment is essential for ensuring efficient resource allocation while maintaining quality of service (QoS). Many approaches in the literature aim to minimize energy use in the cloud, and workload prediction is one of the most robust and promising tasks in energy-aware cloud computing. This paper presents a service-oriented model for determining future resource requirements by predicting cloud workloads. The model incorporates several key issues alongside a load predictor to establish an energy-effective cloud environment. Workload prediction is performed with a Multilayer Perceptron (MLP) because of its better prediction quality compared to the most commonly used approaches. An implementation architecture of the proposed model is also suggested to achieve the goal of this paper.
A Noble Analytical Text Summarization Technique for Natural Bengali Language
R. Sikder, M. Hossain, F. M. R. H. Robi
2018 21st International Conference of Computer and Information Technology (ICCIT). Pub Date: 2018-12-01. DOI: 10.1109/ICCITECHN.2018.8631950
Text summarization is the process of condensing a text or document into a shorter summary. Many summarization tools exist for English, and a few works address automated Bengali text or document summarization, but those tools are not very suitable from an application point of view. Summarization is categorized into two approaches: extractive and abstractive. Most summarization methods for Bengali text are extractive, and such methods cannot capture the whole theme of a document; a reader is satisfied with a summary only if it conveys all the important information of the input document. Our proposed method introduces an enhanced summarization technique that improves the quality of the output. The method is modeled by combining a set of mathematical rules with Bengali grammatical rules. It also addresses many problems of extractive summarizers and points toward abstractive summarization methods. Although the method has been developed for Bengali, it is a generic, platform-independent approach and can be flexibly extended to other languages.
Towards Developing an Intelligent Fire Exit Guidance System Using Informed Search Technique
Ashratuz Zavin, Fahim Anzum, S. M. Faisal Rahman, M. Islam, M. Hoque
2018 21st International Conference of Computer and Information Technology (ICCIT). Pub Date: 2018-12-01. DOI: 10.1109/ICCITECHN.2018.8631910
In this era of industrialization, unanticipated fire accidents in homes, industries, and corporate workplaces have become a foremost concern. Besides damaging valuable property and equipment, such fire incidents also take countless human lives. Although automated fire-detection and alarm systems have been implemented, no automated fire exit guidance system has yet been introduced that provides a systematic and intelligent way to evacuate a fire-affected place. In this paper, an intelligent automated fire exit guidance system using the A* search algorithm is presented. Along with guiding affected people through the safest optimal path, the system computes the least crowded and shortest path by considering distance, endangered nodes, and a crowd distribution mechanism. It also detects whether a path is already compromised by fire and dynamically suggests a second optimal path to the fire exit. Experimental results show that the proposed system handles distance- and safety-related challenges accurately and effectively to minimize loss of life and resources.
Efficient and Lossless Hiding of Electronic Patient Records in Medical Images
Md. Fazla Elahi, H. Nyeem, Md Abdul Wahed
2018 21st International Conference of Computer and Information Technology (ICCIT). Pub Date: 2018-12-01. DOI: 10.1109/ICCITECHN.2018.8631976
Reversible data hiding (RDH) schemes have proved to be a key solution for the effective management and security protection of rapidly growing medical images and their associated electronic patient records (EPRs). This paper investigates the potential of a new RDH scheme with pixel-to-block (PTB) mapping to address those challenges. Existing PTB-based schemes are inefficient due to sub-optimal changes in the repeated pixels, direct and unconditional embedding into their least significant bits, or the requirement of additional knowledge such as a look-up table or location map. We use a PTB mapping that creates a 2×2 block by repeating an original pixel. These blocks are adjusted with the value of 4-bit data, keeping the first (i.e., original) pixel of the block intact and modifying the others with the minimum possible changes. A better embedding rate-distortion performance is thereby obtained, and the overflow and underflow problems are effectively tackled without requiring any location map or look-up table. Embedding EPR data into a large set of medical images demonstrates that the proposed scheme offers significantly better embedded-image quality with high embedding capacity compared to prominent PTB-based RDH schemes.
Reduction of Distribution Transformer Burn in Power Sector Utilities
Amdadul Haque, Nazia Hossain, Hedayetul Islam
2018 21st International Conference of Computer and Information Technology (ICCIT). Pub Date: 2018-12-01. DOI: 10.1109/ICCITECHN.2018.8631909
Access to uninterrupted electricity is one of the basic needs of daily life. It is a vital index of economic development in developing countries, where the economy is driven by energy availability and efficient power management in electricity distribution networks. The power sector utilities in developing countries, however, are not fully automated and not up to the mark. Due to industry-friendly government policies, a large number of new electrical connections are provided to industries and domestic consumers. In distribution networks, it is common for distribution transformers to burn out when they become overloaded. These burned transformers increase the maintenance cost of the distribution lines and the overall operating cost of the electricity utilities. In this article, we develop a Load Management System (LMS) to reduce distribution transformer burn-out due to excess load. The LMS uses monthly electricity consumption data and, based on the regional consumption behavior of consumers, back-calculates the load and outputs the load status of each distribution transformer. The LMS has been implemented in different power utilities of Bangladesh, and it reduces transformer maintenance costs to a great extent.
Bangla Sentence Correction Using Deep Neural Network Based Sequence to Sequence Learning
S. Islam, Mst. Farhana Sarkar, Towhid Hussain, M. Hasan, D. Farid, Swakkhar Shatabda
2018 21st International Conference of Computer and Information Technology (ICCIT). Pub Date: 2018-12-01. DOI: 10.1109/ICCITECHN.2018.8631974
Developments in deep neural networks, particularly for natural language processing, have motivated researchers to apply these techniques to challenging problems such as machine translation and automatic grammar checking. In this paper, we address the problem of Bangla sentence correction and auto-completion using an encoder-decoder based sequence-to-sequence recurrent neural network with long short-term memory cells. For this purpose, we have constructed a standard benchmark dataset incorporating word mis-arrangement, missing-word, and sentence-completion tasks. We trained our model on this dataset and achieved 79% accuracy on the test set. We have made all our methods and datasets available for future use by other researchers at https://github.com/mrscp/bangla-sentence-correction. An online tool based on our methods has also been developed and is available at http://brl.uiu.ac.bd/s2s.
A Hybrid Under-Sampling Method (HUSBoost) to Classify Imbalanced Data
Mahmudul Hasan Popel, Khan Md Hasib, Syed Ahsan Habib, Faisal Muhammad Shah
2018 21st International Conference of Computer and Information Technology (ICCIT). Pub Date: 2018-12-01. DOI: 10.1109/ICCITECHN.2018.8631915
Imbalanced learning is the problem of learning from data whose class distribution is highly imbalanced. Class imbalance problems appear increasingly in many domains and pose a challenge to traditional classification techniques, and learning from imbalanced data with two or more classes creates additional complexities. Studies suggest that ensemble methods can produce more accurate results than standard imbalance-learning techniques such as sampling and cost-sensitive learning. To deal with this problem, we propose a new hybrid under-sampling based ensemble approach (HUSBoost) for handling imbalanced data, which consists of three basic steps: data cleaning, data balancing, and classification. First, we remove noisy data using Tomek links. We then create several balanced subsets by applying random under-sampling (RUS) to the majority-class instances. These under-sampled majority-class instances and the minority-class instances constitute the subsets of the imbalanced dataset; having the same number of majority- and minority-class instances, they become balanced subsets of the data. In each balanced subset, a random forest (RF), AdaBoost with a decision tree (CART), and AdaBoost with a support vector machine (SVM) are trained in parallel, and a soft-voting approach combines their results. The final prediction averages the results from all balanced subsets. We use 27 datasets with different imbalance ratios to verify the effectiveness of the proposed model and compare its experimental results with the RUSBoost and EasyEnsemble methods.
A Feasible 6 Bit Text Database Compression Scheme with Character Encoding (6BC)
Md. Ashiq Mahmood, T. Latif, K. Hasan, Md. Riadul Islam
2018 21st International Conference of Computer and Information Technology (ICCIT). Pub Date: 2018-12-01. DOI: 10.1109/ICCITECHN.2018.8631932
Character encoding means representing a repertoire of characters by some encoding framework. Encoding a character efficiently is always desirable because it requires fewer bits and less time for handling information, and it has an enormous range of applications, including data communication, data storage, transmission of textual information, and database technology. In this paper, a new compression technique is proposed for text data that encodes each character in 6 bits, namely 6-Bit Text database Compression (6BC). The scheme encodes printable characters with 6 bits using a lookup table: 8-bit characters are converted into 6-bit codes, the characters are partitioned into four sets, and the position of each character is then used to encode it uniquely in 6 bits. The scheme is also applied in database technology by compressing the text data in a relation of a database. With the help of the lookup table, 6BC can both compress and decompress the data, and the reverse procedure for decompression to recover the original data is also detailed. The output of 6BC is further compressed with the well-known Huffman and LZW algorithms. Our experimental results show promising efficiency, and the procedure is further demonstrated with examples and descriptions.
Impact of Fast Food Consumption on Health: A Study on University Students of Bangladesh
Md. Ridowan Chowdhury, Md. Razaul Haque Subho, Md. Maruf Rahman, Samiul Islam, Dipankar Chaki
2018 21st International Conference of Computer and Information Technology (ICCIT). Pub Date: 2018-12-01. DOI: 10.1109/ICCITECHN.2018.8631962
Understanding the current health status of a population is a prerequisite for developing public health strategies, and the growing prevalence of fast food businesses affects public health slowly and negatively. To comment on the public health status of a country, data on many health-related aspects of a large population are required. This research aims to investigate the health status of the young generation of Bangladesh, considering common health aspects. Further, we investigate potential measures of fast food consumption behavior and the health hazard factors associated with it. For this, we designed a questionnaire, gathered responses, and extracted insights from the survey using data-driven methods. A total of 170 university students, of whom 122 were male (71.76%) and 48 were female (28.23%), constitute the sample of this research. We analyzed the data with correlation analysis and the chi-squared test to understand the behavior of the features. The results describe university students' health status and its relation to fast food consumption rate, and provide a demographic comparison of fast food consumption rates across eight regions of Bangladesh. The results can create social awareness and may help in public health-related decision making in Bangladesh.
A Secure Web Server for E-Banking
Orvila Sarker, Mehedi Hasan, N. M. Istiak Chowdhury
2018 21st International Conference of Computer and Information Technology (ICCIT). Pub Date: 2018-12-01. DOI: 10.1109/ICCITECHN.2018.8631911
The main challenge in any online banking system is to secure the information stored in the web server while providing an extra degree of privacy to individual bank clients during every transaction. Unfortunately, traditional systems do not provide the scope to hide an individual client's transaction information on the server. As a result, there is a chance of being cheated by a bank employee or by the authority responsible for running the system. In this work we propose a method to design a secure web server for an online banking system using the RC4 algorithm. In this system, we introduce a secure money transaction process by using a secret key for each transaction made by the client or user. Only a valid client or authorized user is able to access his information; for this, he has to register with the system by providing some basic information about himself. It is important to memorize the encryption key, which is used for both encryption and decryption: if a user forgets this key, he will not be able to make any transaction.