Pub Date : 2017-01-01DOI: 10.1109/CONFLUENCE.2017.7943239
Adheesh Budree, Sheril Chacko, Louis Fourie
A key tool in the fight against poverty, and in bridging the gap between the privileged and the underprivileged, is access to information. Free internet connectivity is central to providing this access. This research paper focuses on the implementation of free Wi-Fi in selected underprivileged areas of South Africa through Project Isizwe, which is based on the belief that every citizen of South Africa, regardless of socio-economic circumstances, has the right to good-quality, affordable Internet access. By combining a review of the currently available literature with an analysis of Project Isizwe as a case study, factors were derived for the future successful implementation of free Wi-Fi and access to information in underprivileged communities.
{"title":"Implementing free Wi-Fi in underprivileged communities: A case study of Project Isizwe","authors":"Adheesh Budree, Sheril Chacko, Louis Fourie","doi":"10.1109/CONFLUENCE.2017.7943239","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943239","url":null,"abstract":"A key tool in the fight against poverty and bridging the gap between privileged and underprivileged is access to information. Free internet connectivity is central to providing this access. This research paper focuses on the implementation of free Wi-Fi within selected underprivileged areas of South Africa through Project Isizwe, which is based on the belief that every citizen of South Africa regardless of their socio-economic conditions has the right to have good quality, affordable Internet access. With a combination of studying current available literature and analysing Project Isizwe as a case study, factors were deduced for future successful implementation of free Wi-Fi and access to information in underprivileged communities.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"85 1","pages":"687-692"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90373712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2017-01-01DOI: 10.1109/CONFLUENCE.2017.7943252
A. Mecwan, N. Devashrayee
As the first block of any RF receiver system, the LNA demands very high gain and low noise together with high linearity. Many techniques are available to increase the gain and reduce the noise contribution of an LNA. The linearity of the LNA has begun to attract researchers' attention as transistor sizes shrink and operating frequencies move into the microwave range. Traditional linearity-improvement techniques may not work under higher gain requirements. Derivative Superposition (DS) is one of the newer methods and appears promising for improving LNA linearity. This paper covers the concept of DS, discusses the issues with the simple DS method, and suggests possible design improvements. The challenges for each design are also presented. LNA designs using DS and its possible variations are implemented and compared in terms of gain, power requirement, and linearity.
{"title":"Linearity improvement of LNA using Derivative Superposition: Issues and challenges","authors":"A. Mecwan, N. Devashrayee","doi":"10.1109/CONFLUENCE.2017.7943252","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943252","url":null,"abstract":"LNA being the first block of all RF receiver systems, demands very large gain and low noise with high linearity. Lots of techniques are available to increase the gain and reduce the noise contribution of LNA. Linearity of LNA has started grabbing attention of researchers as transistor size reduces and frequency of operation goes to the microwave range. Traditional techniques of linearity improvement may not work with higher gain requirements. Derivative Superposition (DS) is one of the new methods, which seems promising for the improvement in Linearity of LNA. The paper covers the concept of DS. The issues with the simple DS method are discussed and possible improvements in the design are suggested. The challenges for each design are also presented. LNA designs using DS with possible variations are implemented and compared for Gain, Power requirement and Linearity.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"98 1","pages":"759-763"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80812296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
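The core idea behind DS can be illustrated numerically. The sketch below is not from the paper: it uses an invented polynomial I-V model in which an auxiliary transistor is biased so that its third-order transconductance coefficient (g3) cancels that of the main device, since g3 dominates third-order intermodulation. All coefficient values are hypothetical.

```python
# Illustrative sketch of the Derivative Superposition (DS) principle:
# cancel the composite third-order coefficient g3 by summing the drain
# currents of a main and an auxiliary device with opposite-sign g3.

def drain_current(v, g1, g3):
    """Simple polynomial I-V model around the bias point: i = g1*v + g3*v**3."""
    return g1 * v + g3 * v ** 3

# Hypothetical coefficients: main device in strong inversion (g3 < 0),
# auxiliary device biased in weak inversion (g3 > 0), sized so g3 cancels.
MAIN = {"g1": 20e-3, "g3": -8e-3}
AUX = {"g1": 2e-3, "g3": +8e-3}

def composite_current(v):
    # DS: the two drain currents are summed at the common output node.
    return drain_current(v, **MAIN) + drain_current(v, **AUX)

def g3_estimate(i, h=1e-3):
    """Estimate g3 = I'''(0)/3! via a central third-difference (exact for cubics)."""
    d3 = (i(2 * h) - 2 * i(h) + 2 * i(-h) - i(-2 * h)) / (2 * h ** 3)
    return d3 / 6
```

Evaluating `g3_estimate(composite_current)` gives approximately zero, while the main device alone retains its full g3, which is the improvement in the third-order intercept that DS targets.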
Pub Date : 2017-01-01DOI: 10.1109/CONFLUENCE.2017.7943129
Hina Gupta, Nitasha Hasteer, Rana Majumdar
Within the domain of text mining, sentiment analysis is a rapidly developing field. Sentiment analysis is the computational analysis of the views, sentiments, and opinions expressed in a text, and of its positivity or negativity. This paper identifies the factors responsible for the differing sentiments a person holds about a particular entity. The objective of this work is to categorise the factors that influence a sentiment analysis system owing to the varied sentiments of an individual. The methodology of Interpretive Structural Modelling (ISM) is employed to identify the driving power and dependence power of the various elements influencing sentiment analysis.
{"title":"Interpretive structure modelling(ISM) for feature dependency in sentiment analysis","authors":"Hina Gupta, Nitasha Hasteer, Rana Majumdar","doi":"10.1109/CONFLUENCE.2017.7943129","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943129","url":null,"abstract":"Under the domain of text mining, Sentiment Analysis is a field that is in progress these days. Sentiment analysis is the calculative analysis of views, sentiments, opinions and positivity or negativity of a text. This paper identifies the factors that are responsible for the different sentiments of a person regarding a particular entity. In this work, the objective is to categorize the factors that influence the system of sentiment analysis due to varied sentiments of an individual. The methodology of Interpretative Structure Modeling has been employed for identifying the driving power and the dependent power of the various elements influencing sentiment analysis.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"1 1","pages":"86-91"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91044538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
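The ISM step the abstract refers to can be sketched briefly. Given a final reachability matrix, a factor's driving power is its row sum and its dependence power is its column sum (the inputs to MICMAC classification). The factors and matrix below are hypothetical examples, not taken from the paper.

```python
# Minimal ISM sketch: driving power = row sum of the final reachability
# matrix; dependence power = column sum. Example factors are invented.

FACTORS = ["domain knowledge", "context", "negation handling", "lexicon quality"]

# reachability[i][j] == 1 means factor i influences (reaches) factor j.
REACHABILITY = [
    [1, 1, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 1, 1, 1],
]

def driving_power(matrix):
    """How many factors each factor drives (including itself)."""
    return [sum(row) for row in matrix]

def dependence_power(matrix):
    """How many factors each factor depends on (including itself)."""
    return [sum(col) for col in zip(*matrix)]
```

Plotting each factor at (dependence, driving) coordinates then separates autonomous, dependent, linkage, and independent (driver) factors in the usual MICMAC quadrants.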
Pub Date : 2017-01-01DOI: 10.1109/CONFLUENCE.2017.7943248
C. V. Kumar, K. Sastry
Digital Signal Processing (DSP) has evolved into an integral component of advances in electronics. Among DSP operations, the Fast Fourier Transform (FFT) plays a prominent role in signal processing. FFT computation time can be reduced when the number of zero-valued inputs (Z) outnumbers the non-zero-valued inputs (NZ), because computations on Z are unnecessary. This is addressed by minimising the computations on Z through FFT pruning (partial or complete). The pruning method is implemented in hardware and an improvement in computation time is observed. The pruned FFT is developed in Verilog and validated on a Spartan 3E FPGA (xc3s500e-fg320-5).
{"title":"Design and implementation of FFT pruning algorithm on FPGA","authors":"C. V. Kumar, K. Sastry","doi":"10.1109/CONFLUENCE.2017.7943248","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943248","url":null,"abstract":"Digital Signal Processing (DSP) has evolved as an integrated component in electronics advancement. Among all the DSP operations, Fast Fourier Transform (FFT) plays a prominent role in signal processing. FFT computational time reduces when the number of zero valued inputs (Z) outnumbers the non-zero valued inputs (NZ) due to unnecessary computations for Z. The above issue can be resolved by minimizing computations on Ζ by the method Pruning (Partial, Complete) in FFT. The pruning method is implemented in the hardware and computational time improvement is observed. The FFT Pruning is developed in Verilog and validated on Spartan 3E FPGA (xc3s500e-fg320-5).","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"11 1","pages":"739-743"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75973336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
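As a software illustration of input pruning (the paper's implementation is in Verilog on an FPGA, not shown here), a recursive radix-2 DIT FFT can simply return early for any sub-array that is entirely zero, which is where the time saving comes from when Z outnumbers NZ:

```python
import cmath

# Input-pruned radix-2 decimation-in-time FFT sketch: recursion branches
# whose inputs are all zero are skipped entirely. Input length must be a
# power of two.

def fft_pruned(x):
    n = len(x)
    if n == 1:
        return list(x)
    if not any(x):                     # pruning: an all-zero branch needs no work
        return [0.0] * n
    even = fft_pruned(x[0::2])
    odd = fft_pruned(x[1::2])
    out = [0.0] * n
    for k in range(n // 2):            # butterfly combine
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

For a sparse input such as `[1, 0, 0, 0, 0, 2, 0, 0]`, the all-zero sub-branches are never descended into, yet the result matches the full DFT.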
Pub Date : 2017-01-01DOI: 10.1109/CONFLUENCE.2017.7943183
Akshat Tyagi, Juhi Kushwah, Monica Bhalla
Wireless Sensor Networks provide an inexpensive and power-efficient solution to a wide range of real-world problems. This solution, however, comes at the cost of computing ability. Because of these advantages, Wireless Sensor Networks have a bright future; however, they are prone to security issues and attacks. Providing effective solutions to these problems is a requirement if Wireless Sensor Networks are to continue to pave the way for future innovations. This paper introduces the concept of Wireless Sensor Networks and provides a brief overview of their origins and applications. It also discusses the security goals, threats, attacks, and constraints associated with Wireless Sensor Networks.
{"title":"Threats to security of Wireless Sensor Networks","authors":"Akshat Tyagi, Juhi Kushwah, Monica Bhalla","doi":"10.1109/CONFLUENCE.2017.7943183","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943183","url":null,"abstract":"Wireless Sensor Networks provide an inexpensive and power efficient solution to a majority of real world problems. This solution, however, is provided at the cost of computing ability. Due to their perks, Wireless Sensor Networks have a bright future. However, Wireless Sensor Networks are prone to security issues and attacks. Providing effective solutions to these problems is a requirement in order to ensure that Wireless Sensor Networks continue to pave way for future innovations. This paper introduces the concept of Wireless Sensor Networks, while providing a brief overview of its origins and applications. The paper also discusses the security goals, threats, attacks and constraints associated with Wireless Sensor Networks.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"1 1","pages":"402-405"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73951773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2017-01-01DOI: 10.1109/CONFLUENCE.2017.7943217
V. Raval, Apurva Shah
God created this universe along with all living and non-living beings. Humans are among the best of His creations, and in human beings, eyes are the gift of God to see His creations. Humans are considered the only developed creatures among His creations and have advanced from the Stone Age to the computing era. As human civilisation grew, transactions moved from the barter system to currency. Every country has its own currency in the form of coins and paper notes, and each country's currency has its own unique features, colours, denominations, and international value. Having been given two eyes, we can all recognise currency easily. Yet it is often not as easy as it seems: the denomination of a note can be recognised readily, but distinguishing a counterfeit note from a genuine one is difficult. For blind people especially, it is a herculean task, like finding a needle in a haystack. The main aim of this work is to develop a robust image-processing algorithm that identifies Indian currency, both paper notes and coins, announces it in Indian regional languages, and to turn this into a working prototype that provides blind people with a tool for the same purpose.
{"title":"iCu□e — An IoT application for Indian currency recognition in vernacular languages for visually challenged people","authors":"V. Raval, Apurva Shah","doi":"10.1109/CONFLUENCE.2017.7943217","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943217","url":null,"abstract":"God created this universe along with all living and non-living beings. Human is one of the best among his creations and in Human beings, e yes are the gift of God to see His creations. As of now, humans are considered as the only developed creatures among His creations and have developed themselves from Stone Age to the Computing Era. As the human civilizations grew up, the transactions have moved from barter system to currency. Every country has its own currency in terms of coins and paper notes. Each of the currency of Individual County has its unique features, colors, denominations and international value. We, all, having been given two beautiful eyes could recognize the currency easily. Yet, many a times, it is not as easy as it seems to recognize a currency. The denomination can easily be recognized for a currency but it becomes difficult to identify a counterfeit currency from the real one. Especially for the blind people, it is a herculean task like finding a needle from haystack. 
The main motive of the work is to develop a robust image processing algorithm to identify the Indian currency, both paper-based and coins, in Indian Regional languages and convert it into a working prototype in order to provide a tool to the blind people for the same purpose.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"49 1","pages":"577-581"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79249243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2017-01-01DOI: 10.1109/CONFLUENCE.2017.7943210
A. Yaganteeswarudu, Y. Vishnuvardhan
Farmers are called the backbone of India because, when the backbone is damaged, we are unable to stand: the Indian economy is largely based on farmers, and without them it would be in a critical state. It is an apt metaphor. Indians proudly say that India is famous for agriculture, yet farmers face many challenges while growing crops on their land. Farmers depend largely on rain. Sometimes rainfall arrives according to the season, but sometimes there is little or no rainfall, and sometimes floods occur. Floods can damage all the crops, causing great losses for farmers. Farmers who have borrowed money from others or taken bank loans have, after crop damage or loss, many times been driven to suicide. The main objective of this application is to develop a site, specifically for farmers, that provides interaction with the government. The application is designed with ASP.NET MVC. Before starting a crop, farmers enter the details of their land, the crops to be grown, and the expected cost of the crop. If any damage occurs, the farmer uploads the corresponding videos or images to the site so that the loss amount can be received promptly from the government.
{"title":"Software appication to prevent suicides of farmers with asp.net MVC","authors":"A. Yaganteeswarudu, Y. Vishnuvardhan","doi":"10.1109/CONFLUENCE.2017.7943210","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943210","url":null,"abstract":"Farmers are called as back bones of India because when backbone is damaged we are unable to stand. Farmers are compared as backbones because Indian economy mainly based on farmers without farmers Indian Economy will be in critical stage. Indians will proudly say India is famous for agriculture but at the same time farmers are facing many challenges during growing of crops in their land. It's an apt metaphor indeed. Farmers basically depend on the rain. Sometimes rainfall will occur according to the season but sometimes no rainfall or very less rainfall and sometimes floods may occur. Due to floods and all the crops may be damaged there will be a great loss for farmers. The farmers who borrowed the amount from others or taken loan from bank due to damage of crops or loss of crops many times committed for suicides. The main objective of this application is develop a site especially for farmers to provide interaction with the government. The application is designed with asp.net MVC. Here in this website before starting the crop the farmers should enter the details of land, the crops to be grown and expected cost for that crop. 
If any damage occurs then farmer should upload the corresponding videos or images in the site so that he will get the loss amount immediately from government.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"9 1","pages":"543-546"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81864651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2017-01-01DOI: 10.1109/CONFLUENCE.2017.7943156
Joy Dutta, Sarbani Roy
Here we present a prototype of a smart building for the smart city, using emerging technologies such as IoT (Internet of Things), fog, and cloud computing. The demand for everything "smart" is increasing daily, but the main stumbling block is the high price. Our aim is therefore to improve the standard of living at home and in the office with improved working facilities, where the whole system is automatic, efficient, and under the user's control via a smartphone or computer, while the cost stays within the budget of the common man. All of this is achieved by incorporating IoT, fog, and cloud. The integration uses open-source hardware and software to reduce the cost dramatically compared with other existing solutions, implemented in an effective and ingenious way without compromising the QoS (Quality of Service) of any functionality provided by those solutions.
{"title":"IoT-fog-cloud based architecture for smart city: Prototype of a smart building","authors":"Joy Dutta, Sarbani Roy","doi":"10.1109/CONFLUENCE.2017.7943156","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943156","url":null,"abstract":"Here, we present a prototype of a smart building using newly surfacing technologies like IoT (Internet of Things), fog and cloud for the smart city. The demand for everything smart is increasing daily, but the main stumbling block is its high price. So, our aim is to improve the standard of living in home and in office with newly improved working facilities where the whole system will be automatic, efficient and will be under the control of the user via his/her smartphone or computer but the cost will stay within the budget of a common man. All these are done by the incorporation of IoT, fog and cloud. The assimilation is done using open source hardwares and softwares to reduce the cost dramatically than the other existing solutions and implement it in an impressive and ingenious way without compromising QoS (Quality of Service) of any of the functionalities provided by other existing solutions.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"7 1","pages":"237-242"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79764036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2017-01-01DOI: 10.1109/CONFLUENCE.2017.7943241
Arvinder Kaur, Shubhra Goyal Jindal
Open-source bug repositories such as Bugzilla and Jira contain substantial data for numerous projects. Each project has various types of issues, such as bug reports, improvements to existing features, new product features, and tasks to be done. Each issue type has various attributes, and obtaining such massive data manually is tedious and time-consuming, and can also yield error-prone data. Our prime focus is to collect bug reports automatically, reducing errors due to human mistakes and improving accuracy. This paper describes a bug report collection system that automates the collection of bug reports from the Jira repository. The tool is implemented in C# and extracts data from Jira using its REST API (application programming interface), which provides access to resources via URI paths. Our application makes an HTTP request and parses the response into objects. The tool automatically extracts information on more than 100 Apache projects maintained in the Jira repository and generates reports. The generated reports contain several bug attributes, such as bug ID, one-line description, assigned priority, the components to which the bug belongs, long description, affected version, assignee, and several others. These reports can be used for further analysis using some or all of the bug-related attributes. Potential applications include classifying various types of bugs (such as security, memory, and concurrency bugs), prioritising bugs, and predicting bug severity using machine learning. Thus, the generated reports are useful to researchers, who can analyse them in different areas, for example to prioritise bugs based on the assigned priorities or to classify which types of bugs are more frequent in which types of projects, saving manual effort as well as time.
{"title":"Bug report collection system (BRCS)","authors":"Arvinder Kaur, Shubhra Goyal Jindal","doi":"10.1109/CONFLUENCE.2017.7943241","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943241","url":null,"abstract":"Open source bug repositories such as Bugzilla and Jira contain substantial data of numerous projects. Each project has various types of issues such as bug reports, improvement to an existing feature, and new feature of the product and task that needs to be done. Each type of issue has various attributes and obtaining such massive data manually is a tedious and time consuming process and could also lead to error prone data. Our prime focus is to collect bug reports automatically to reduce errors made due to human mistakes and improves accuracy. This paper describes a bug report collection system which automates the process of collection of bug reports from the bug repository Jira. This tool is implemented in C# which extracts the data from Jira repository using REST APIs (application program interface). REST APIs provides access to resources via URI paths. Our application makes an HTTP request and parses the response into objects. This tool automatically extracts the information of more than 100 projects of Apache maintained by Jira repository and generates information in the forms of reports. The reports generated contains several bug attributes such as bug Id, One-line description of a bug, priority assigned to bugs, components to which bug belongs to, long description of a bug, affected version, assignee of a bug and several other attributes. These reports can be used for further analysis by using some or all the attributes related to a bug. Some potential applications could be classifying the various types of bugs such as security, memory and concurrency bugs; prioritization of the bugs; prediction of severity of bugs using machine learning etc. 
Thus, these generated reports are useful for the researchers as they can use to analyse them in different areas such as prioritize the bugs based on the priorities assigned and also classify which types of bugs are more frequent in which type of projects and can save manual effort as well as time.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"06 1","pages":"697-701"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79844302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
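The extraction step the abstract describes can be sketched as flattening one issue object, as returned by Jira's REST search endpoint (`/rest/api/2/search?jql=...`), into a flat report row. The paper's tool is in C#; this is an illustrative Python sketch. The field names follow Jira's public REST API, but the sample issue below is invented.

```python
# Sketch of flattening a Jira REST issue object into a bug-report row with
# the attributes the paper lists (bug ID, summary, priority, components,
# description, affected versions, assignee). SAMPLE_ISSUE is hypothetical.

def to_report_row(issue):
    fields = issue.get("fields", {})
    return {
        "bug_id": issue.get("key"),
        "summary": fields.get("summary"),                      # one-line description
        "priority": (fields.get("priority") or {}).get("name"),
        "components": [c["name"] for c in fields.get("components", [])],
        "description": fields.get("description"),
        "affected_versions": [v["name"] for v in fields.get("versions", [])],
        "assignee": (fields.get("assignee") or {}).get("displayName"),
    }

SAMPLE_ISSUE = {
    "key": "HADOOP-1234",
    "fields": {
        "summary": "NullPointerException in FileSystem.close()",
        "priority": {"name": "Major"},
        "components": [{"name": "fs"}],
        "description": "Closing an already-closed FileSystem throws NPE.",
        "versions": [{"name": "2.7.0"}],
        "assignee": {"displayName": "Jane Doe"},
    },
}
```

Rows produced this way can be written to CSV for the downstream analyses the abstract mentions (classification, prioritisation, severity prediction).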
Pub Date : 2017-01-01DOI: 10.1109/CONFLUENCE.2017.7943137
Dikscha Sapra, Rashi Sharma, A. Agarwal
This paper aims to discuss and compare various metaheuristic algorithms applied to the Knapsack Problem, a combinatorial optimisation (maximisation) problem that requires finding how many of each weighted item to include in a hypothetical knapsack so that the total weight does not exceed the capacity. A variety of algorithms can be used to reach an optimised solution for such a problem. In this paper, Tabu Search, Scatter Search, and Local Search algorithms are compared, taking execution time, solution quality, and relative difference from the best known quality as the metrics for this NP-hard problem.
{"title":"Comparative study of metaheuristic algorithms using Knapsack Problem","authors":"Dikscha Sapra, Rashi Sharma, A. Agarwal","doi":"10.1109/CONFLUENCE.2017.7943137","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943137","url":null,"abstract":"This paper aims to discuss and compare various metaheuristic algorithms applied to the “Knapsack Problem”. The Knapsack Problem is a combinatorial optimization maximization problem which requires to find the number of each weighted item to be included in a hypothetical knapsack, so the total weight is less than or equal to the required weight. To come to an optimized solution for such a problem, a variety of algorithms can possibly be used. In this paper, Tabu Search, Scatter Search and Local Search algorithms are compared taking execution time, solution quality and relative difference to best known quality, as metrics to compute the results of this NP-hard problem.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"46 1","pages":"134-137"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82589358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
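The simplest of the three compared methods, Local Search, can be sketched for the 0/1 variant of the problem. This is an illustrative sketch, not the paper's implementation; Tabu Search extends the same loop with a tabu list of recently flipped items to escape local optima.

```python
import random

# Bit-flip local search for the 0/1 knapsack problem: repeatedly flip one
# item in or out and keep the change only if it improves a feasible value.

def value(solution, values, weights, capacity):
    total_weight = sum(w for w, s in zip(weights, solution) if s)
    if total_weight > capacity:
        return -1                       # infeasible solutions score below empty
    return sum(v for v, s in zip(values, solution) if s)

def local_search(values, weights, capacity, iterations=1000, seed=0):
    rng = random.Random(seed)
    n = len(values)
    best = [0] * n                      # start from the empty (feasible) knapsack
    best_val = 0
    for _ in range(iterations):
        i = rng.randrange(n)
        cand = best[:]
        cand[i] = 1 - cand[i]           # flip one item in or out
        cand_val = value(cand, values, weights, capacity)
        if cand_val > best_val:
            best, best_val = cand, cand_val
    return best, best_val
```

On the classic instance with values (60, 100, 120), weights (10, 20, 30), and capacity 50, single-flip search ends in one of the maximal feasible subsets; whether it reaches the optimum (220) depends on the path taken, which is exactly the gap the tabu and scatter mechanisms aim to close.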