With the advent of new technologies and devices, the amount of data produced by mankind is growing rapidly every year. Big data is characterized by huge volume, high velocity, and a wide variety of data: structured data (e.g., relational data), unstructured data (e.g., text, PDF, and Word documents), and semi-structured data (e.g., XML). Processing such data with a traditional database server is a tedious task. Google addressed this problem with an algorithm called MapReduce, which underlies the Hadoop framework. In this paper we discuss some security issues in Hadoop associated with big data. Big data applications are very useful to organizations and industries of all sizes, from small companies to large-scale enterprises.
{"title":"Security Issues in Hadoop Associated With Big Data","authors":"M. P. Kumar, Sampurnima Pattem","doi":"10.9790/0661-1903068085","DOIUrl":"https://doi.org/10.9790/0661-1903068085","url":null,"abstract":"Due to the advent of new technologies, devices, Now-a-days the amount of data produced by mankind is growing rapidly every year. Big data includes huge volume, high velocity, and extensible variety of data. The Big data is in the form of Structured data (Ex: Relational data), unstructured data (Ex: Text, PDF, Word) and Semi Structured data (Ex: XML data). It is really a tedious task to process such data through a traditional database server. Google solved this problem using an algorithm called Map Reduce, used in the technology Hadoop. In this paper we discuss some security issues in Hadoop associated with Big data. Big data applications are very much useful to Organizations, all type of Companies may be small or large scale companies and to the Industries etc.","PeriodicalId":91890,"journal":{"name":"IOSR journal of computer engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85430925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
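The MapReduce model mentioned in the abstract above can be illustrated with a minimal word-count sketch in Python. This is a simplified, single-process illustration of Hadoop's map and reduce phases, not the distributed framework itself; the function names are hypothetical:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for each word in the input split.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Shuffle/reduce: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

documents = ["big data is big", "hadoop processes big data"]
mapped = chain.from_iterable(map_phase(d) for d in documents)
print(reduce_phase(mapped))
# {'big': 3, 'data': 2, 'is': 1, 'hadoop': 1, 'processes': 1}
```

In real Hadoop, the mapped pairs would be partitioned across nodes and the reduce step would run in parallel per key; the data flow is the same.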
Password security is a central issue today, as almost everything is done over the internet. To secure our data and files we need passwords. A password should be kept safe from third parties, and it must resist shoulder-surfing attacks; for that purpose we design an authentication scheme at the application layer. To secure the password we use the AES encryption technique. This paper presents the design of an effective password-security scheme using the AES algorithm for encryption and decryption. It is based on AES key expansion, in which the encryption process is a bitwise exclusive-or (XOR) operation on a set of strings and numbers with a 128-bit key that changes for every set of strings.
{"title":"Encryption Technique for Secure Password Authentication Scheme at Application Layer","authors":"Poonam Pandey, R. Tyagi, Dr. R. K. Bharti","doi":"10.9790/0661-1904022325","DOIUrl":"https://doi.org/10.9790/0661-1904022325","url":null,"abstract":"Password security is the main issue in today’s trends. Everything is based on the internet. For securing our data, files etc. we need password .Password should be secure and safe from third party and it must be safe from shoulder suffering attack, for that purpose we design an authentication scheme at application layer. To secure our password we use AES Encryption technique. This Paper shows the design of effective security issue for password by AES algorithm for encryption and decryption. It is based on AES Key Expansion in which the encryption process is a bit wise exclusive or operation of a set of strings and numbers with the 128 bit key which changes for every set of strings.","PeriodicalId":91890,"journal":{"name":"IOSR journal of computer engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89681321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
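The core idea described above, XOR-ing data with a 128-bit key that changes per block of input, can be sketched as follows. This is a deliberately simplified illustration of the key-stream idea only, not the real AES cipher (AES also applies substitution and permutation rounds), and the SHA-256-based key derivation is a hypothetical stand-in for AES key expansion:

```python
import hashlib

def expand_key(master_key: bytes, block_index: int) -> bytes:
    # Hypothetical stand-in for AES key expansion: derive a fresh
    # 128-bit (16-byte) key per block, so the key changes for every
    # set of strings, as the scheme describes.
    return hashlib.sha256(master_key + block_index.to_bytes(4, "big")).digest()[:16]

def xor_crypt(data: bytes, master_key: bytes) -> bytes:
    # Bitwise XOR of each 16-byte block with its own expanded key.
    out = bytearray()
    for i in range(0, len(data), 16):
        block = data[i:i + 16]
        key = expand_key(master_key, i // 16)
        out.extend(b ^ k for b, k in zip(block, key))
    return bytes(out)

secret = xor_crypt(b"correct horse battery staple", b"master-key")
# XOR is its own inverse, so applying the same key stream decrypts.
assert xor_crypt(secret, b"master-key") == b"correct horse battery staple"
```

For production use, a vetted AES implementation (e.g., via a standard cryptography library) should be used rather than a hand-rolled XOR scheme.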
SMEs (Small and Medium-sized Enterprises) are essential drivers of innovation and growth in India. SMEs stand to gain the most from cloud computing, since setting up and running ICT (Information and Communication Technology) in the traditional way is complex and costly. However, many SMEs do not see all the information security risks involved in cloud computing and end up compromising their data. This paper provides guidance for Indian SMEs on the security aspects of cloud computing. It facilitates the decision-making process by proposing a taxonomy that highlights the security factors to be considered when evaluating the cloud as a potential solution. The data was collected through a quantitative survey, whose results informed the taxonomy for the Indian scenario. This paper provides a roadmap for current and potential cloud stakeholders who need to ensure security of
{"title":"A Quantitative Analysis of Infrastructural Security Concerns in Cloud Computing for Indian SMEs","authors":"Monisha Singh, C. Kumar","doi":"10.9790/0661-1904023943","DOIUrl":"https://doi.org/10.9790/0661-1904023943","url":null,"abstract":"SMEs (Small and Medium-size Enterprises) are essential drivers for innovation and growth in India. It is observed that SMEs are picking up the most from cloud computing, as it is a complex and costlier task to set-up and run ICT (Information and Communication Technology) in a traditional way. But, many SMEs generally do not see all the information security risks involved in cloud computing and have to compromise on their data at the end. This paper provides a guidance for SMEs of India about the security aspects of cloud computing. It facilitates the decision making process through the proposal of a taxonomy which highlights the security factors to be considered when evaluating the cloud as a potential solution. The data was collected through a quantitative survey which gave inputs for forming up the taxonomy based on the scenario in India. This paper provides a roadmap to the current and potential stakeholders of cloud who need to ensure security of","PeriodicalId":91890,"journal":{"name":"IOSR journal of computer engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73953892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ibrahim Baran Karasin, Dursun Bakir, M. Ülker, A. E. Ulu
Two destructive earthquakes struck Nepal on April 25 and May 12, 2015. These earthquakes rank among the most significant in terms of their effects and were felt in many settlements. They caused considerable damage to buildings and public facilities. Prompt damage assessment of constructions after an earthquake is important in order to prevent loss of life and property in future earthquakes. In this study the effects of the earthquakes on constructions in Nepal are evaluated. Damage to reinforced-concrete (RC) and masonry structures is assessed, and recommendations are made for Nepal, which is seismically active. The goal of this paper is to identify the major causes of structural damage after earthquakes. The observed damage matches typical earthquake damage patterns, and it was observed that deficiencies in the buildings increased the extent of the damage. Careless design and construction of structural systems further increased the damage extent.
{"title":"The Structural Damages After Nepal Earthquakes","authors":"Ibrahim Baran Karasin, Dursun Bakir, M. Ülker, A. E. Ulu","doi":"10.9790/3021-0706014554","DOIUrl":"https://doi.org/10.9790/3021-0706014554","url":null,"abstract":"Two destructive earthquakes happened on April 25 and May 12 in Nepal. These earthquakes have been ranked among the most significant ones due to their effects. The earthquakes were felt in many settlements. Nepal earthquakes caused considerable damage to buildings and to public facilities. The first damage assessment of constructions after earthquake has importance in order to prevent loss of life and property in coming earthquakes. In this study the effects of earthquakes on constructions in Nepal have been evaluated. The damages to reinforced-concrete (RC) and masonry structures after earthquake are evaluated and recommendations made in Nepal which is seismically active. The goal of this paper is to introduce major reasons for structural damages after earthquakes. The observed damages overlap with typical earthquake damages. It has been observed that the negative features of constructions have caused an increase in damage extent. The careless selection of the design and construction of structural systems has increased the damage extent.","PeriodicalId":91890,"journal":{"name":"IOSR journal of computer engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76837752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Since its origin, wireless communication has gained huge popularity and acceptance throughout the world, providing a setup that uses a minimal amount of cabling for data transmission. There has been explosive growth in wireless communications over the last few decades across the cellular telephony, wireless internet, and wireless home networking arenas. As technology has advanced, the amount of data processed and transmitted has grown exponentially, leading to big-data problems. Wireless media are becoming less efficient at handling large amounts of data, signaling the need for new innovation in communications. This paper provides the reader with an overview of an innovation that uses lasers to transmit sound, and elaborates on how it is being used as well as on future developments.
{"title":"Laser Technology Improving Wireless Communication: A Comprehensive Study on Laser Communicator","authors":"S. Khanna, K. Sharma","doi":"10.9790/0661-1904022633","DOIUrl":"https://doi.org/10.9790/0661-1904022633","url":null,"abstract":"Since the origin of wireless communication it has gained huge popularity and acceptance in the whole world, providing a setup which uses minimum amount of cables for transmission of data. There has been an explosive growth in the wireless communications over the last few decades in its applications cellular telephony, wireless internet and wireless home networking arenas. As we have advanced so has the data processed and transmitted has increased exponentially leading to Big Data problems. The wireless media is now becoming less efficient in handling large amounts of data signaling to a new innovation in communications. This paper is intended to provide the reader with an overview of innovation that has been attained with the help of lasers to transmit sound and also elaborates on how it is being used along with the future developments.","PeriodicalId":91890,"journal":{"name":"IOSR journal of computer engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84802705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Knowledge Discovery in Databases (KDD) field of data mining is useful for finding trends, patterns, and anomalies in databases, which helps in making accurate decisions for the future. Association rule mining is an important topic in the data mining field: it finds collections of data attributes that are statistically related in the available data. The Apriori algorithm generates all significant association rules between items in a database. Ant Colony Optimization (ACO) algorithms, in turn, are probabilistic techniques for solving computational problems; they find good paths through graphs by imitating ants' search for food, and have been applied very successfully to several problems. The combined use of ACO and data mining (DM), i.e., applying ACO algorithms to DM tasks, is a very promising direction. In this paper, building on association rule mining and the Apriori algorithm, an improved ant colony algorithm is proposed to solve the frequent pattern mining problem. The ant colony algorithm is employed as an evolutionary algorithm to optimize the set of association rules produced by the Apriori algorithm. The results and a comparison of the method are given at the end of the paper.
Date of Submission: 11-07-2017. Date of acceptance: 22-07-2017.
{"title":"An Improved Association Rule Mining Algorithm Based on Apriori and Ant Colony approaches","authors":"Dr.Hussam M. Al Shorman, Dr.Yosef Hasan Jbara","doi":"10.9790/3021-0707011823","DOIUrl":"https://doi.org/10.9790/3021-0707011823","url":null,"abstract":"The Knowledge Discovery in Databases (KDD) field of data mining is useful in finding trends, patterns and anomalies in the databases which is helpful to make accurate decisions for the future. Association rule mining is an important topic in data mining field. Association rule mining finds collections of data attributes that are statistically related to the data available. Apriori algorithm generates all significant association rules between items in the database. Besides, ACO algorithms are probabilistic techniques for solving computational problems that are based in finding as good as possible paths through graphs by imitating the ants’ search for food. The use of such techniques has been very successful for several problems. The collaborative use of ACO and DM (the use of ACO algorithms for DM tasks) is a very promising direction. In this paper, based on association rule mining and Apriori algorithm, an improved Ant Colony algorithm is proposed to solve the Frequent Pattern Mining problem. Ant colony algorithm is employed as evolutionary algorithm to optimize the obtained set of association rules produced using Apriori algorithm. The results and comparison of the method is shown at the end of the paper. 
--------------------------------------------------------------------------------------------------------------------------------------Date of Submission: 11-07-2017 Date of acceptance: 22-07-2017 --------------------------------------------------------------------------------------------------------------------------------------","PeriodicalId":91890,"journal":{"name":"IOSR journal of computer engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78270168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
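The Apriori step that the paper above builds on can be sketched with a minimal Python implementation that finds frequent itemsets by iteratively extending candidates and pruning those below the support threshold. This is a toy version for illustration; the paper's ACO optimization of the resulting rules is not shown:

```python
from itertools import combinations

def apriori(transactions, min_support):
    # Count support for itemsets of growing size, pruning any itemset
    # whose support falls below the threshold (Apriori property:
    # supersets of infrequent itemsets cannot be frequent).
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    frequent = {}
    k_sets = [frozenset([i]) for i in sorted(items)]
    while k_sets:
        counts = {s: sum(s <= t for t in transactions) for s in k_sets}
        survivors = {s: c for s, c in counts.items() if c >= min_support}
        frequent.update(survivors)
        # Candidate generation: union pairs of surviving k-itemsets
        # into (k+1)-itemsets.
        keys = list(survivors)
        k = len(keys[0]) + 1 if keys else 0
        k_sets = list({a | b for a, b in combinations(keys, 2) if len(a | b) == k})
    return frequent

baskets = [["milk", "bread"], ["milk", "bread", "eggs"],
           ["bread", "eggs"], ["milk", "eggs"]]
freq = apriori(baskets, min_support=2)
```

With this toy data, every single item and every pair is frequent at support 2, while the triple {milk, bread, eggs} occurs only once and is pruned.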
The investigation of a 16×10 Gbps WDM transmission system with 0.8 nm channel spacing over a 1680 km single-mode fiber (SMF) optical link is presented. Two modulation techniques, NRZ and RZ, are examined, and the BER, Q values, and eye-opening factor of the model are analyzed under both modulation formats. The simulation results show the superiority of WDM transmission using the NRZ encoding technique for long-haul optical transmission systems. Comparing the eye diagrams for the NRZ and RZ modulation formats, the Q penalty is found to be 0.68 dB for channel 1 and 0.24 dB for channel 8.
{"title":"Simulation Investigation over 1680km WDM Transmission System Reviewing NRZ and RZ Modulation Formats","authors":"A. Gafur","doi":"10.9790/3021-0707010511","DOIUrl":"https://doi.org/10.9790/3021-0707010511","url":null,"abstract":"-The investigation of a 16x10Gbps WDM transmission system on 0.8nm channel spacing over 1680km optical link distance with Single Mode Fiber (SMF) is predicted. Two different modulation techniques namely NRZ and RZ are examined. It is analyzed the BER, Q values and eye opening factor for this model using two different modulation formats. The simulation results show the dominance of WDM transmission exploiting NRZ encoding technique for long-haul optical transmission system. It is found that Q penalty is 0.68dB for channel 1 and 0.24dB for channel 8 considering eye diagrams with respect to NRZ and RZ modulation formats.","PeriodicalId":91890,"journal":{"name":"IOSR journal of computer engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81864615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
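The Q values reported above relate to the bit error rate through the standard Gaussian-noise approximation BER = ½·erfc(Q/√2), and a Q penalty in dB is simply the difference of two Q values expressed in dB. A quick sketch (illustrative only, with hypothetical Q values, not the paper's simulation setup):

```python
import math

def ber_from_q(q_linear: float) -> float:
    # Gaussian-noise approximation: BER = 0.5 * erfc(Q / sqrt(2)).
    return 0.5 * math.erfc(q_linear / math.sqrt(2))

def q_db(q_linear: float) -> float:
    # Q factor expressed in dB: 20 * log10(Q).
    return 20 * math.log10(q_linear)

# Q = 6 (linear) corresponds to roughly the BER of 1e-9 often used
# as a reference in optical link budgets.
assert ber_from_q(6) < 1e-8

# A Q penalty between two formats for the same channel is the
# difference of their Q values in dB (hypothetical numbers here):
penalty_db = q_db(7.0) - q_db(6.47)
```

Higher Q means a wider eye opening and a lower BER, which is why the eye-diagram comparison in the abstract translates directly into a dB penalty figure.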
In the 30 years since HIV/AIDS was first discovered, the disease has become a disturbing pandemic, taking the lives of 30 million people around the world. In 2010 alone, HIV/AIDS killed 1.8 million people, 1.2 million of whom were living in sub-Saharan Africa. HIV/AIDS is one of the key challenges for the overall development of Ethiopia, as it has led to a seven-year decrease in life expectancy and a greatly reduced workforce. Although there are a number of voluntary counseling and testing (VCT) centers working on HIV/AIDS prevention in several cities of the country, they have not solved the problems related to HIV/AIDS. In addition, in most of the country's counseling and testing centers, the collected data is simply aggregated and used at most for statistical purposes, rather than being analyzed to discover relevant and previously unknown characteristics, relationships, and dependencies. The main objective of this study was pattern discovery: generating interesting hidden association rules from data taken from the Marie Stopes Gondar branch clinic. The contribution of this study is to analyze the data of clients tested for HIV/AIDS at the clinic in order to identify which clients are most vulnerable to HIV/AIDS. It helps counselors in VCT centers predict hidden but interesting relationships among the attributes they use during counseling. The methodology comprised data collection and tool selection. After the data was collected, standard preprocessing tasks were applied to clean the data and make it ready for experimentation: of the 1992 original instances, 1861 were retained. The Weka 3.4 tool was used for the experiments, and the well-known association rule mining algorithm Apriori was used to extract interesting rules from the data.
To obtain these rules, three basic experiments were conducted. Experiment I used the whole data set. Experiment II considered only the positive classes. Experiment III also considered only the positive classes, but with the positive-class attribute removed. One of the experimental results showed that clients who do not use condoms during sexual intercourse and unemployed persons are the most vulnerable to HIV/AIDS.
{"title":"Pattern Discovery and Association Analysis To Identify Customer Vulnerable To HIV/AIDS: Case of Marie Stopes Gonder Branch Clinic","authors":"Fistume Tamene, Fediu Akmel, E. Birhanu, B. Siraj","doi":"10.9790/0661-1904020107","DOIUrl":"https://doi.org/10.9790/0661-1904020107","url":null,"abstract":"In the 30 years since HIV/AIDS was first discovered, the disease has become a disturbing pandemic, taking the lives of 30 million people around the world. In 2010 alone, HIV/AIDS killed 1.8 million people, 1.2 million of whom were living in sub-Saharan Africa. In Ethiopia,HIV/AIDS is one of the key challenges for the overall development of Ethiopia, as it has led to a seven-year decrease in life expectancy and a greatly reduced workforce. Even if there are a number of voluntarily counseling and testing centers that work on HIV/AIDS prevention located in several cities of the country, they didn’t change and solve the problem related with HIV/AIDS. In addition in most of Countries counseling and Testing centers ,the data collected is simply put together and maximum used for statics purpose rather than analyzing to discover relevant and interesting previously unknown data characteristics,relationships,dependencies etc . The main objective of this study was pattern discovery and generating interesting hidden association rules from data which is taken from Marie stopes Gondar branch clinic. The contribution of this Study is by analyzing customer’s data that did HIV/AIDS test on the clinic, to identify which customer is more vulnerable to HIV/AIDS. It helps counselors in VCT centers in predicting some hidden but interesting relationships among the attributes they use during the course of counseling. For doing this, methodology such as data collection and tool selection was used. After data was collected, the main data preprocessing tasks are applied on data sets to clean data and to make it ready for experiment purpose. 
Out of 1992 instances of original data 1861 was made ready for the experiment. Weka3.4. tool is used for experiment and the well known association rule mining algorithm Apriori was used to extract those interesting rules from data. In order to get those interesting rules three basic experiment was conducted .Experiment I was conducted by using the whole data set. Experiment II was conducted by considering only those positive classes. Experiment III was done by only considering those positive classes but with the absence of positive class attribute. One of the result of experiments showed that customers that donot use condom during sexual intercourse and non employed person are vulnerable to HIV/AIDS.","PeriodicalId":91890,"journal":{"name":"IOSR journal of computer engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75116173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
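A rule of the kind reported above (e.g., "no condom use and unemployed → positive") is judged by its support and confidence, which Apriori-style tools such as Weka compute as follows. The records below are hypothetical toy data, not the clinic's:

```python
def support(transactions, itemset):
    # Fraction of records containing every item in the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    # Estimated P(consequent | antecedent)
    # = support(antecedent ∪ consequent) / support(antecedent).
    return (support(transactions, antecedent | consequent)
            / support(transactions, antecedent))

# Hypothetical toy records; attribute values flattened to tags.
records = [
    {"no_condom", "unemployed", "positive"},
    {"no_condom", "unemployed", "positive"},
    {"no_condom", "employed", "negative"},
    {"condom", "unemployed", "negative"},
]
conf = confidence(records, {"no_condom", "unemployed"}, {"positive"})
# Both records matching the antecedent are positive, so confidence is 1.0.
```

High-confidence, adequately supported rules of this form are exactly what the three experiments above surface for counselors.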
The last decade has seen tremendous advancements in the field of Multi-Agent Systems (MAS). Inter-agent communication is the most integral part of any MAS. The problem of imprecision and obscurity in agent communication has swelled with the increasing complexity of MAS, and deepened with the rise of heterogeneity and diversity across data-storage platforms. Another technology with parallel growth is semantic theory, which contributes significantly to giving meaning to expressions and thereby reducing ambiguity. The authors here provide two layers of message verification. First, a vocabulary of the terms used in messages is constructed with the help of an ontology in OWL to diminish vagueness. Then, at a higher level, business organization rules contribute to another ontology defining the policies that orchestrate the MAS, establishing the organizational domain. O-MaSE is used as the MAS design methodology. In this paper, an extension to O-MaSE is provided through this dual verification system.
{"title":"Introducing Two Level Verification Model for Reduction of Uncertainty of Message Exchange in Inter Agent Communication in Organizational-Multi-Agent Systems Engineering, O-MaSE","authors":"G. K. Shankhdhar, M. Darbari","doi":"10.9790/0661-1904020818","DOIUrl":"https://doi.org/10.9790/0661-1904020818","url":null,"abstract":"The last decade has seen tremendous advancements in the field of Multi-Agent Systems. Inter-agent communication is the most integral part of any MAS. The problem of impreciseness and obscurity in agent communication has been swelled with increasing complexity of MAS and deepened with the rise of heterogeneity and diversity in varied platforms of data storage. Another technology with parallel growth is the semantic theory, significantly contributing in providing meaning to expressions thereby reducing ambiguity. The authors here provide two layers of verification of messages. At first, a vocabulary of terms used in messages is constructed with the help of an ontology in OWL to diminish vagueness. Then at a higher level business organization rules contribute to another ontology defining the policies that orchestrate the MAS establishing the domain of organization. The O-MaSE is used as MAS design methodology. In this paper, extension to O-MaSE is provided through dual verification system.","PeriodicalId":91890,"journal":{"name":"IOSR journal of computer engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90884862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
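The first verification layer described above, checking message terms against an ontology-derived vocabulary, can be sketched without an OWL library as a simple membership test. This is a toy stand-in: in the paper the vocabulary comes from an OWL ontology, whereas here it is a hard-coded hypothetical set of terms:

```python
# Hypothetical vocabulary, standing in for terms extracted from an OWL ontology.
ONTOLOGY_TERMS = {"order", "invoice", "approve", "reject", "customer"}

def verify_message(message_terms):
    # Layer 1: flag any term not defined in the shared vocabulary,
    # so ambiguous or misspelled terms are caught before delivery.
    unknown = [t for t in message_terms if t not in ONTOLOGY_TERMS]
    return (len(unknown) == 0, unknown)

ok, unknown = verify_message(["approve", "invoice"])
assert ok and unknown == []

ok, unknown = verify_message(["approve", "invoyce"])  # undefined term
assert not ok and unknown == ["invoyce"]
```

The second layer would apply the same pattern at the policy level, validating whole messages against the organizational rules encoded in the second ontology.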