From time immemorial, libraries have been the home of the enlightened. As the world advances, new discoveries open new horizons of knowledge, and the growing volume of research has produced a proliferation of books, journals and transcripts. With this surplus of books and journals in both existing and new areas, managing them in a library has become a tedious task; looking up a single book can feel like searching for a needle in a haystack. In this age of digitalization and automation, we present an automated and innovative library management system that not only automates the management process but also eases the manual searching of books. The emphasis is on using Optical Character Recognition (OCR) together with search algorithms to obtain a system that can identify and locate books in a physical library.
{"title":"Global library innovative system: An image-search facilitated library system","authors":"Prashast Sahay, Ridhima Gupta, Rishi Kumar, Humera Zabin","doi":"10.1109/CONFLUENCE.2016.7508166","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2016.7508166","url":null,"abstract":"From time immemorial, Libraries have been the house of Illuminati. With the advancing world, new discoveries are being made creating new horizons of knowledge and with new and increasing researches being made there has been a proliferation of books, journals and transcripts. With this surplus production of books and journals in both existing and new areas managing them in library has become a tedious task. Looking up for a book seems like searching for a needle in a haystack. In this age of digitalization and automation we present an automated and innovative library management system which will not only automate the management process but will also ease the manual searching of books. The emphasis is on using the techniques of Optical Character Recognition to obtain a resulting system which can identify and locate books in a physical library by the use of searching algorithms.","PeriodicalId":299044,"journal":{"name":"2016 6th International Conference - Cloud System and Big Data Engineering (Confluence)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126564886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1900-01-01 | DOI: 10.1109/CONFLUENCE.2016.7508122
B. Gandhi, Tanvi Sachdev, J. Jadon, Arvind Kumar
The following paper presents a comparative study of two antennas: a microstrip patch antenna excited through an inset microstrip feed line, and a microstrip fractal antenna excited through a coaxial feed mechanism. The antenna prototypes were designed and simulated in HFSS, and parameters such as gain, VSWR and return loss were obtained. The characteristics obtained for Antenna 1, the inset-fed microstrip slot patch antenna, are -12.37 dB, -26.31 dB and -11.20 dB; for Antenna 2, the microstrip fractal antenna, they are 18.50 dB, -24.00 dB and -27.00 dB; and the gains of the two antennas are 5.29 dBi and 3.32 dBi respectively. Applications include mobile and satellite communication, Global Positioning System (GPS), radar, and rectennas.
{"title":"Designing and performance metrics analysis of microstrip antenna and microstrip patch fractal antenna","authors":"B. Gandhi, Tanvi Sachdev, J. Jadon, Arvind Kumar","doi":"10.1109/CONFLUENCE.2016.7508122","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2016.7508122","url":null,"abstract":"The following paper presents a relative study of a microstrip patch antenna which is excited using the inset feed strip line and the other is the microstrip fractal antenna which is excited using the coaxial feed mechanism. The antenna prototypes were designed and simulated on HFSS. Several parameters such as gain, VSWR, return loss were obtained. The characteristics obtained for Antenna 1 that is the inset fed microstrip slot patch antenna are -12.37dB, -26.31 dB and - 11.20 dB and for Antenna 2 i.e microstrip fractal antenna are 18.50dB, -24.00 dB and -27.00 dB and the gain for the two antennas are 5.29dbi and 3.32dbi respectively. The applications include Mobile and satellite communication application, Global Positioning System applications, Radar Application, Rectenna Application etc.","PeriodicalId":299044,"journal":{"name":"2016 6th International Conference - Cloud System and Big Data Engineering (Confluence)","volume":"195 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122524420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1900-01-01 | DOI: 10.1109/CONFLUENCE.2016.7508199
A. Singhal, D. Pandey, Renuka Nagpal, D. Mehrotra
A web document on an educational web site should deliver current, up-to-date information to the user in good time; this defines the informativeness of the document. While navigating a web document, users expect the facts cited in it to be Complete, Current, Accurate and Reliable. On the basis of these factors, we surveyed 6 educational web documents, with 100 students evaluating each document, and applied Factor Analysis together with tests of sampling adequacy and significance using the SPSS tool to find the principal components of these factors. Based on the four factors with eigenvalues above 1, we prioritize the factors according to which information should be presented in a sequenced and prioritized manner in a web document.
{"title":"Measuring informativeness of a web document","authors":"A. Singhal, D. Pandey, Renuka Nagpal, D. Mehrotra","doi":"10.1109/CONFLUENCE.2016.7508199","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2016.7508199","url":null,"abstract":"A web document of an educational web site is a document which caters updated and latest information to the user within precise time that defines the informativeness of a web document. While navigating a web document user expects that the facts which are cited in a web document must be Complete, Current, Accurate, and Reliable. On the basis of these factors in the ensuing paper we took a survey of 6 educational web documents which was conducted by 100 students per web document in which we have enforced Factor Analysis and assertive tests which assures the adequacy and significance of the sample using SPSS tool to find the principal components of these factors. On the basis of eigenvalue above 1 of those 4 factors in the ensuing paper we have prioritized the factors on the basis of which equipped information should be commenced in a sequenced and prioritized demeanour in a web document.","PeriodicalId":299044,"journal":{"name":"2016 6th International Conference - Cloud System and Big Data Engineering (Confluence)","volume":"31 16-17","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120929986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1900-01-01 | DOI: 10.1109/CONFLUENCE.2016.7508040
A. Agrawal, Arvinder Kaur
Metaheuristic algorithms are applied in various fields to solve real-world problems. Researchers are often perplexed when selecting an appropriate metaheuristic algorithm for a specific problem, and a comparative study is needed to resolve this. With this in view, we evaluate the performance of three popular metaheuristic algorithms: Evolution Strategy, Tabu Search and Variable Neighborhood Search. We framed three research questions to evaluate our hypothesis, conducted extensive experiments and collected the results. Variable Neighborhood Search performed far better than the other approaches, but this observation alone is insufficient to draw a conclusion, so statistical tests such as the F-test and post-hoc tests were performed. The study finds an interaction effect between problem size and the metaheuristic used, and no clear superiority of one metaheuristic over the others.
{"title":"An empirical evaluation of three popular meta-heuristics for solving Travelling Salesman Problem","authors":"A. Agrawal, Arvinder Kaur","doi":"10.1109/CONFLUENCE.2016.7508040","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2016.7508040","url":null,"abstract":"Metaheuristic algorithms are applied in various fields to solve realistic problems. In many situations, a researcher moves in perplexed situation when it comes to selection of an appropriate metaheuristic algorithm for any specific problem. To overcome from such situation a comparative study is must. Considering this view we have done the performance evaluations of three popular metaheuristic algorithms: Evolution Strategy, Tabu Search and Variable Neighborhood Search. We framed three research questions to evaluate our hypothesis. Extensive experiments are conducted and results are collected. It was observed that Variable Neighborhood Search approach performed far better than other approaches. But this result seems insufficient in presenting some conclusion. Therefore, various statistical tests such as F-test, Post-hoc tests were performed. An obvious outcome of this study is that there is an interaction effect between the problem sizes and the metaheuristic used and no clear superiority of one metaheuristic over the other.","PeriodicalId":299044,"journal":{"name":"2016 6th International Conference - Cloud System and Big Data Engineering (Confluence)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134411888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1900-01-01 | DOI: 10.1109/CONFLUENCE.2016.7508051
Bhat Jasra, Aniqa Yaqoob, S. Dubey
In this paper, an efficient yet simple approach to salt-and-pepper noise removal based on a back-propagation neural network and adaptive median filtering is suggested. The proposed method uses the supervised learning capability of a back-propagation neural network to remove salt-and-pepper noise in the first phase, and an adaptive median filter to enhance image quality in the second phase. It overcomes the drawbacks of conventional median filtering by preserving fine details. Experimental results show that the algorithm performs better than a neural-network-based model and other conventional filtering mechanisms, and performance remains good even for images with high noise density.
{"title":"Removal of high density salt and pepper noise using BPANN-modified median filter technique","authors":"Bhat Jasra, Aniqa Yaqoob, S. Dubey","doi":"10.1109/CONFLUENCE.2016.7508051","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2016.7508051","url":null,"abstract":"In this paper an efficient yet simple approach of salt and pepper noise removal based on back propagation neural network and adaptive median filtering has been suggested. The proposed method uses supervised learning capability of back-propagation neural network to remove the salt and pepper noise in first phase and adaptive median filter is used to enhance the image quality in second phase. It overcomes all drawbacks of conventional median filtering by preserving the fine details. Experimental results show that the algorithm performs better than neural network based model & other conventional filtering mechanisms. Performance is exceptionally good even for high density noised images.","PeriodicalId":299044,"journal":{"name":"2016 6th International Conference - Cloud System and Big Data Engineering (Confluence)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133308251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1900-01-01 | DOI: 10.1109/CONFLUENCE.2016.7508049
Neelima Bhatia, Arunima Jaiswal
Text summarization is an emerging technique for producing the summary of a text article. With the enormous amount of information accumulating on the internet, it is challenging for a user to go through all the information available on the web, and the wide availability of internet content has therefore opened a broad research area in automatic text summarization within Natural Language Processing (NLP), especially in the statistical machine learning community. Over the past half century, the problem has been addressed from numerous standpoints, in varying domains and using many paradigms. In this survey paper we investigate the popular and important work done in single- and multiple-document summarization, giving particular prominence to empirical approaches and extractive techniques. Some promising approaches that focus on specific aspects of summarization are also discussed. Special consideration is devoted to automatic evaluation of summarization systems, as future research on summarization depends strongly on progress in this problem space.
{"title":"Automatic text summarization and it's methods - a review","authors":"Neelima Bhatia, Arunima Jaiswal","doi":"10.1109/CONFLUENCE.2016.7508049","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2016.7508049","url":null,"abstract":"Text summarization is an incipient practice for verdict out the summary of the text article. Text summarization has grew so uses such as Due to the enormous aggregate of information getting augmented on internet; it is challenging for the user to verve through altogether the information accessible on web. The large availability of internet content partakes constrained a broad research area in the extent of automatic text summarization contained by the Natural Language Processing (NLP), especially statistical machine learning communal. Terminated the bygone half a century, the defaulting has been addressed from numerous diverse standpoints, in erratic domains and using innumerable archetypes. In this survey paper we investigate the popular and important work done in the field of single and multiple document summarizations, generous distinctive prominence towards pragmatic approaches and extractive techniques. Particular auspicious slants that quintessence on unambiguous minutiae of the summarization are also deliberated. 
Exceptional consideration is ardent to involuntary assessment of summarization classifications, as forthcoming investigation on summarization is sturdily reliant over evolvement in this problem space.","PeriodicalId":299044,"journal":{"name":"2016 6th International Conference - Cloud System and Big Data Engineering (Confluence)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134473299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
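The extractive techniques the survey emphasizes date back to Luhn's frequency-based sentence scoring: score each sentence by the corpus frequency of its words and keep the top scorers in original order. A minimal baseline sketch (the example document is illustrative):

```python
import re
from collections import Counter

def extractive_summary(text, n=1):
    """Frequency-based extractive summarization: rank sentences by the
    summed document frequency of their words, return the top n in order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))
    scored = sorted(range(len(sentences)),
                    key=lambda i: sum(freq[w] for w in
                                      re.findall(r"[a-z]+", sentences[i].lower())),
                    reverse=True)
    return " ".join(sentences[i] for i in sorted(scored[:n]))

doc = ("Summarization shortens text. Summarization methods are extractive "
       "or abstractive. The weather was pleasant.")
print(extractive_summary(doc))
```

The off-topic sentence scores lowest because its words occur only once, illustrating why frequency is a usable (if crude) relevance signal for extraction.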
Pub Date : 1900-01-01 | DOI: 10.1109/CONFLUENCE.2016.7508214
Vicky Chawla, Sumit Sharma
For the past two decades, nearest neighbor search in high-dimensional data sets has been in the limelight. Content-based multimedia indexing has been an active area of research, as multimedia content is mapped into high-dimensional vectors of numbers which are then stored in a high-dimensional index; for large collections, high-performance environments and large amounts of main memory have been required. This paper reviews the NV-Tree (Nearest Vector Tree), a disk-based data structure that addresses the specific problem of locating the k nearest neighbors within a collection of high-dimensional vectors. The NV-Tree is already used in industry to index more than 150 thousand hours of video for very effective near-duplicate detection. We present a critical summary of the published research literature on the NV-Tree. The purpose is to create familiarity with existing thinking and research on the topic, which may justify future research into previously overlooked or understudied areas.
{"title":"A study on Nearest-Vector Tree","authors":"Vicky Chawla, Sumit Sharma","doi":"10.1109/CONFLUENCE.2016.7508214","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2016.7508214","url":null,"abstract":"From the past two decades, the research area of nearest neighbor search in high dimensional data sets has always been in the limelight. Content-based multimedia indexing has been an active area of research as multimedia content is mapped into high-dimensional vectors of numbers, which are then stored in a high-dimensional index. For large collections, high-performance environments and large amount of main memory have been used. This paper reviews the NV-Tree (Nearest Vector Tree), a disk based data structure, which addresses the specific problem of locating the k-nearest neighbors within a collection of high dimensional data sets. The NV-tree is already used in industry to index more than 150 thousand hours of video for (very effective) near-duplicate detection. We present a critical summary of published research literature pertinent to NV-Tree under contemplation for research. The purpose is to create familiarity with existing thinking and research on a particular topic, which may justify future research into a previously overlooked or understudied area.","PeriodicalId":299044,"journal":{"name":"2016 6th International Conference - Cloud System and Big Data Engineering (Confluence)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131720254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1900-01-01 | DOI: 10.1109/CONFLUENCE.2016.7723633
Rabindra Kumar Barik, P. Das, R. K. Lenka
Tourism is one of the prime contributors to the economic growth of any country, particularly developed countries. There is therefore a need to disseminate detailed tourism information in a simple way by integrating modern technologies such as spatial technologies and web technology. Further, for globalization of the tourism sector, spatial information must be easy to use in order to attract Foreign Tourist Arrivals (FTAs) and Foreign Exchange Earnings (FEEs) from across the world. Hence a Spatial Data Infrastructure (SDI) model is needed in which each stakeholder can access, use and exchange spatial information in the tourism sector. The present work describes the development and implementation of an efficient and interoperable Service Oriented Architecture (SOA) based SDI model for geospatial web services in the tourism sector. The developed SDI model allows web service descriptions to be published and requests to be submitted to discover web services of interest to the user. The model supports the integration of various geospatial web services, i.e. Web Feature Service (WFS), Catalogue Service for the Web (CS-W), Web Map Service (WMS) and Web Coverage Service (WCS), on a distributed platform. Open source GIS (OSGIS) software has been used for development and implementation of the SOA based SDI model: Quantum GIS, MapWindow GIS and PostGIS for creating and storing the spatial and non-spatial tourism database.
{"title":"Development and implementation of SOA based SDI model for tourism information infrastructure management web services","authors":"Rabindra Kumar Barik, P. Das, R. K. Lenka","doi":"10.1109/CONFLUENCE.2016.7723633","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2016.7723633","url":null,"abstract":"Tourism is one of the prime areas for the economic growth rate of any country, particularly developed countries. Hence, there is an essential need to make efforts to disseminate information about the platform of tourism information details in a simplest way but in detailed manner by integrating modern technologies such as spatial technologies and web technology. Further, for the globalization in tourism sector, it is required easy to use the spatial information for the attraction of Foreign Tourist Arrivals (FTAs) and Foreign Exchange Earnings (FEEs) across the world. Therefore, it is the need to establish a Spatial Data Infrastructure (SDI) Model where each stakeholder can access, use and exchange spatial information in tourism sector. In the present work, it represents the development and implementation of an efficient and interoperable Service Oriented Architecture (SOA) based SDI Model for geospatial web services in tourism sector. The developed SDI Model allows the publishing of web service descriptions as well as to submit requests to discover the web service of user's interests. The Model supports the integration of various geospatial web services i.e. Web Features Service (WFS), Web Catalogue Service (CS-W), Web Map Service (WMS) and Web Coverage Service (WCS) in distributed platform. The open source GIS (OSGIS) software has been used for development and implementation of SOA based SDI Model. For creation and storing of spatial and non-spatial tourism database, it has been used Quantum GIS, Map Window GIS and PostGIS. 
It includes PHP: Hypertext Preprocessor, GeoNetwork, GeoServer and Apache Tomcat for dynamic server side scripting and imparting geospatial web services for sharing and exchange of geospatial data. The temple city, Bhubaneswar, India has been taken as the test case for Tourism Information Infrastructure Management in India.","PeriodicalId":299044,"journal":{"name":"2016 6th International Conference - Cloud System and Big Data Engineering (Confluence)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116633778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
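A client of such an SDI discovers and calls services like WMS through standardized OGC request URLs, which is what makes the architecture interoperable. A sketch of building a WMS GetMap request (the endpoint, layer name and bounding box are placeholders, not the paper's actual GeoServer deployment; the parameter names follow the OGC WMS 1.1.1 specification):

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, size=(512, 512)):
    """Build an OGC WMS 1.1.1 GetMap request URL, the kind of request a
    GeoServer instance in the described SDI would answer."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": "EPSG:4326",
        "BBOX": ",".join(map(str, bbox)),
        "WIDTH": size[0], "HEIGHT": size[1], "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

# hypothetical endpoint and layer; bbox roughly around Bhubaneswar
url = wms_getmap_url("http://example.org/geoserver/wms",
                     "tourism:sites", (85.75, 20.20, 85.90, 20.35))
print(url)
```

Because every OGC-compliant server accepts the same parameter set, the same client code works against any WMS published through the SDI's catalogue.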
Pub Date : 1900-01-01 | DOI: 10.1109/CONFLUENCE.2016.7508210
N. Gaur, Deepika, Sagar Nandrajog, Anu Mehra
With the ever-increasing demand for portability, power reduction is the most important aspect of circuit design. In this paper we propose a Low Power High Speed Charge Sharing small swing domino Comparator that significantly reduces power dissipation. The circuit is primarily based on small-swing domino logic, which lowers the voltage level of logic 1 and raises the voltage level of logic 0. The proposed circuit reduces power dissipation by 68.34% at a frequency of 100 MHz in comparison to the charge-sharing dynamic latch comparator of [1], and delay is reduced by 61.80%. The circuit is implemented in 90 nm and 180 nm CMOS technology.
{"title":"A Low Power High Speed Charge Sharing small swing domino Comparator","authors":"N. Gaur, Deepika, Sagar Nandrajog, Anu Mehra","doi":"10.1109/CONFLUENCE.2016.7508210","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2016.7508210","url":null,"abstract":"With the ever increasing demand for portability, the need of power reduction is the most important aspect of circuit designing. In this paper we propose a Low Power High Speed Charge Sharing small swing domino Comparator which significantly reduces the power dissipation. The circuit is primarily based on the small swing domino logic which lowers the voltage level of logic 1 and increases the voltage level of logic 0. Thus proposed circuit reduces the power dissipation by 68.34% at a frequency of 100MHz in comparison to charge sharing dynamic latch comparator [1] and delay is minimized to 61.80%. The circuit is implemented on 90nm and 180nm CMOS technology.","PeriodicalId":299044,"journal":{"name":"2016 6th International Conference - Cloud System and Big Data Engineering (Confluence)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128596181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 1900-01-01 | DOI: 10.1109/CONFLUENCE.2016.7508178
Aditya Pan, Abhay Bansal, B. White, R. Leslie
PingER is an end-to-end internet performance measurement tool, originally developed by the SLAC National Accelerator Laboratory (SLAC) and used to measure the Digital Divide from an internet performance viewpoint. PingER measurements are made by measurement agents (MAs) at approximately 60 sites in around 20 countries, and roughly 700 target hosts in over 160 countries are set up for measurement purposes. However, using a desktop machine as an MA draws around 100 watts per machine continuously. This paper discusses the implementation of an Android-based mobile application to be used as a PingER monitoring agent, through which the total space and energy requirements can be reduced to a few watts while leveraging the advantages of Android smartphones.
{"title":"Application for the emulation of PingER on android devices","authors":"Aditya Pan, Abhay Bansal, B. White, R. Leslie","doi":"10.1109/CONFLUENCE.2016.7508178","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2016.7508178","url":null,"abstract":"PingER is an end to end internet performance measurement tool. It was originally developed by the SLAC National Accelerator Laboratory (SLAC). It is used as a tool to test the Digital Divide from an internet performance viewpoint. PingER measurements are done by measurement agents (MA) in approximately 60 sites in around 20 countries. A number of hosts are setup globally for target measurement purposes, and there are roughly 700 sites in over 160 countries dedicated to the same. However, using a desktop machine for the MA purpose causes a massive power drain of around 100 Watts per hour per machine. This paper discusses the implementation of an Android based mobile application to be used as a pingER monitoring agent, through which the total space and energy requirements can be minimized to a few Watts, while leveraging the advantages of Android smartphones.","PeriodicalId":299044,"journal":{"name":"2016 6th International Conference - Cloud System and Big Data Engineering (Confluence)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122538076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}