Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943251
Akash Gupta, Manohar Khatri, S. Rajput, Anu Mehra, S. Bathla
A low-power two-bit magnitude comparator is proposed in the present work. The proposed magnitude comparator, based on a coupling technique, is compared with the basic comparator circuit. The performance of both comparators is analyzed for power consumption, delay, and power-delay product (PDP) with a VDD sweep. The simulations are carried out in Mentor Graphics (Eldo SPICE) using 90 nm CMOS technology at a 1 V supply. The simulation results show power savings for the coupled magnitude comparator of 60.26% for the greater-than function, 56.14% for the less-than function, and 59.48% for the equals-to function.
{"title":"Design of low power magnitude comparator","authors":"Akash Gupta, Manohar Khatri, S. Rajput, Anu Mehra, S. Bathla","doi":"10.1109/CONFLUENCE.2017.7943251","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943251","url":null,"abstract":"A low power two bit magnitude comparator has been proposed in the present work. The proposed magnitude comparator using the technology of coupling has been compared with the basic comparator circuit. The performance analysis of both the different comparators has been done for power consumption, delay and power delay-product (PDP) with VDD sweep. The simulations are carried on Mentor graphics (ELDO Spice) using 90nm CMOS technology at 1 V supply. The simulation results of the coupled magnitude comparator circuits is in good agreement in terms of power consumption at percentage of 60.26% in greater than function and 56.14% in lesser than f unction and 59.48% in equals to function comparators.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"25 1","pages":"754-758"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81423086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943157
Hari Bhaskar Sankaranarayanan, Ravish Singh Thind
Multi-modal travel is becoming prominent among Indian passengers due to the rise of low-cost air travel, increasing disposable income, and connectivity by rail, bus, and air across various cities. This is a huge opportunity for all stakeholders in the transport sector, such as rail, aviation, and surface transport, to operate seamlessly to boost domestic transportation and ultimately offer passengers best-of-breed travel solutions. In this paper, we propose a framework for policy analytics for rail and air connectivity and discuss how big data can play a key role in analyzing existing datasets such as routes, schedules, booking information, benchmark studies, economic characteristics, and passenger demographics. Big data tools are very useful for processing unstructured data sets, analyzing them and providing meaningful visualizations. Policy analytics can combine the power of information technology, operations research, statistical modeling, and machine learning to modernize and equip policy makers for better data-driven decisions while drafting policies. This would ultimately enable the Government's vision of smart cities, seamless transport hubs, and interchanges that provide seamless connectivity and high passenger satisfaction.
{"title":"Multi-modal travel in India: A big data approach for policy analytics","authors":"Hari Bhaskar Sankaranarayanan, Ravish Singh Thind","doi":"10.1109/CONFLUENCE.2017.7943157","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943157","url":null,"abstract":"Multi-modal travel is becoming prominent amongst Indian Passengers due to the advance of low-cost air travel, increasing disposable income, and connectivity by rail, bus, and air across various cities. This is a huge opportunity for all stakeholders within transport sector like Rail, Aviation, and Surface transport to operate seamlessly to boost domestic transportation and ultimately offer passengers the best of breed travel solution. In this paper, we will propose a framework for policy analytics for Rail and Air connectivity and discuss how big data can play a key role to analyze the existing datasets like routes, schedules, booking information, benchmark studies, economic characteristics, and passenger demographics. Big data tools are very useful in processing unstructured data sets by analyzing them and providing meaningful visualizations. Policy analytics can combine the power of information technology, operations research, statistical modeling and machine learning to modernize and equip policy makers for better data-driven decisions while drafting policies. This would ultimately enable Government's vision on smart cities, seamless transport hubs, and interchanges that provide seamless connectivity and high passenger satisfaction.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"PP 1","pages":"243-248"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84301131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943120
Rishu Chhabra, S. Verma, C. Krishna
Driver behavior is an essential component of the driver-vehicle-environment system and plays a key role in the design of transport and vehicle systems intended to improve the efficiency and safety of human mobility. The most important factors that influence driver behavior are the environment, the vehicle, and the driver itself. Experience, distraction, fatigue, and drowsiness are some of the other factors that have an impact on driver behavior. Improper driving behavior is a leading cause of accidents, and thus detection of driver behavior is an emerging area of research interest. This paper discusses the various techniques used for monitoring driver behavior and classifies them into real-time and non-real-time techniques. A comparative analysis is performed on the basis of the advantages, disadvantages, and methodology applied by the various techniques for detecting driver behavior for Intelligent Transportation Systems (ITS).
{"title":"A survey on driver behavior detection techniques for intelligent transportation systems","authors":"Rishu Chhabra, S. Verma, C. Krishna","doi":"10.1109/CONFLUENCE.2017.7943120","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943120","url":null,"abstract":"Driver behavior is an essential component of the driver-vehicle-environment system and plays a key role in the design of the transport and vehicle systems in order to improve the efficiency and safety of human agility. The most important factors that influence driver behavior are the environment, vehicle and the driver itself. Experience, distraction, fatigue, drowsiness etc. are so me of the other factors that have an impact on driver behavior. Improper driving behavior is the leading cause of the accidents and thus, detection of driver behavior is an emerging area of research interest. This paper discusses the various techniques used for monitoring driver behavior and also classifies them into real-time and non-real time techniques. A comparative analysis was performed on the basis of advantages, disadvantages and methodology applied by various techniques for detecting driver's behavior for Intelligent Transportation Systems (ITS).","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"97 1","pages":"36-41"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86661521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943162
Gursleen Kaur, Mala Kalra
Workflows have simplified the execution of complex, large-scale scientific applications. The cloud is an ideal paradigm for executing them, but many open challenges must be addressed for effective workflow scheduling. Several algorithms have been proposed for workflow scheduling, but most fail to incorporate key features of the cloud, such as heterogeneous resources, the pay-per-use model, and elasticity, along with Quality of Service (QoS) requirements. This paper proposes a hybrid genetic algorithm that uses a PEFT-generated schedule as a seed, with the aim of minimizing cost while keeping execution time below a given deadline. A good seed helps to accelerate the process of obtaining an optimal solution. The algorithm is simulated on WorkflowSim and evaluated using realistic scientific workflows of different sizes. The experimental results show that our approach performs better than various state-of-the-art algorithms.
{"title":"Deadline constrained scheduling of scientific workflows on cloud using hybrid genetic algorithm","authors":"Gursleen Kaur, Mala Kalra","doi":"10.1109/CONFLUENCE.2017.7943162","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943162","url":null,"abstract":"Workflows have simplified the execution of complex large scale scientific applications. The cloud acts as an ideal paradigm for executing them but with many open challenges that need to be addressed for an effective workflow scheduling. Several algorithms have been proposed for workflow scheduling, but most of them fail to incorporate the key features of cloud like heterogeneous resources, pay-per-usage model, and elasticity along with the Quality of service (QoS) requirements. This paper proposes a hybrid genetic algorithm which uses the PEFT generated schedule as a seed with the aim to minimize cost while keeping execution time below the given deadline. A good seed helps to accelerate the process of obtaining an optimal solution. The algorithm is simulated on WorkflowSim and is evaluated using various scientific realistic workflows of different sizes. The experimental results validate that our approach performs better than various state of the art algorithms.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"151 1","pages":"276-280"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86664533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943259
R. Sugandhi, P. Srivastava, P. Srivastav, A. Sanyasi, L. M. Awasthi, Vijaysinh Parmar, Keyur Makadia, Ishan Patel, Sandeep Shah
The implementation of a data acquisition and control system (DACS) for laboratory plasma experiments is a challenging task that develops gradually over time due to: (a) rapidly evolving requirements driven by new findings, (b) the application of new ideas to the experiments, (c) the interaction of the software with specialized hardware, and (d) the time scales of measurement and control. This motivates the development of software based on a flexible and modular architecture for scientific computing. We have broadly classified it into: (a) base design, dealing with specialized measurement hardware, and (b) application design, for system testing and experimentation. The role of object-oriented software engineering (OOSE) is important so that the developed software components can be effectively reused by applications. OOSE on the LabVIEW graphical programming platform is a new and evolving paradigm. It is demonstrated on the Large Volume Plasma Device (LVPD) using high-speed PXIe bus based instrumentation and a hybrid approach of OOSE and data-flow programming. The LVPD is a pulsed plasma device pursuing investigations ranging from the excitation of wave packets at whistler time scales, relevant to space plasmas, to the understanding of plasma instability and transport due to electron temperature gradient (ETG) driven turbulence, relevant to fusion plasmas. The DACS development effectively handles acquisition cards on the PXIe bus, data streaming, high-channel-count system design, and synchronized behavior on the backplane bus. Application development includes applications highlighting pulsed operation and data visualization, including an oscilloscope for raw and processed data visualization. This paper discusses the requirements, object-oriented design, development, testing, results, and lessons learned from this initiative.
{"title":"Implementation of object oriented software engineering on LabVIEW graphical design framework for data acquisition in large volume plasma device","authors":"R. Sugandhi, P. Srivastava, P. Srivastav, A. Sanyasi, L. M. Awasthi, Vijaysinh Parmar, Keyur Makadia, Ishan Patel, Sandeep Shah","doi":"10.1109/CONFLUENCE.2017.7943259","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943259","url":null,"abstract":"The data acquisition and control system (DACS) implementation for laboratory plasma experiments is a challenging task, develops gradually over time due to the: (a) rapidly evolving requirements driven by the new findings, (b) application of new ideas to the experiments, (c) interaction of the software with the specialized hardware and (d) time scales of measurement and controls. This motivates development of software based on flexible and modular architecture for the scientific computing. We have broadly classified it as: (a) base design dealing with specialized measurement hardware and (b) application design for system testing and experimentation. The role of object oriented software engineering (OOSE) is important so that developed software components could be effectively utilized by applications. The OOSE on LabVIEW graphical programming platform is a new and evolving paradigm. A demonstration of it, is achieved in Large Volume Plasma Device (LVPD) utilizing high speed PXIe bus based instrumentation using hybrid approach of OOSE and data flow programming. The LVPD is a pulsed plasma device involved in pursuing investigations ranging from excitation of wave packets of whistler time scales, relevant to space plasmas to understanding of plasma instability and transport due to electron temperature gradient (ETG) driven turbulence, relevant for fusion plasmas. The development of DACS effectively handles high acquisition cards on PXIe bus, data streaming, high channel count system design and synchronized behavior on the backplane bus. Application development include development of applications highlighting pulsed operation and data visualization including development of oscilloscope for raw and process data visualization. This paper will discuss the requirements, object oriented design, development, testing, results and lessons learned from this initiative.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"88 1","pages":"798-803"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81382218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943206
Anshuk Dubey, S. Pal
The automated, data-centric technology of cloud computing serves end users through the service model named SaaS, where these end users may be either skilled or unskilled. A key challenge is retrieving the requested data from enormous data stores through a service-based cloud architecture, for any type of cloud user, in an efficient way using methodologies such as DBaaS, multi-tenancy, and database integration. Among these, multi-tenancy and database integration can be applied in the SaaS service model through the tightly coupled nature of service composition. However, this static service composition suffers from implementation complexity, cost, and limited flexibility and scalability with respect to database adaptability and efficient data availability. The proposed Dynamic Service Composition (DSC) methodology can retrieve different types of data from multiple heterogeneous cloud databases after establishing connectivity with new databases at runtime and on demand. This dynamic database connectivity through loosely coupled service composition can supply the requested data at high computational speed and overcomes the challenges introduced by static service composition. DSC can govern multiple cloud databases through flexible service connectivity without any information about their location in the cloud; this concept can be termed database virtualization. Overall, the proposed DSC mechanism can monitor heterogeneous cloud databases and yields a significant gain in computational power for efficient data availability, at notably lower cost, in a flexible and scalable way.
{"title":"Dynamic service composition towards database virtualization for efficient data management","authors":"Anshuk Dubey, S. Pal","doi":"10.1109/CONFLUENCE.2017.7943206","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943206","url":null,"abstract":"Automated data centric technology of Cloud computing facilitate the end users through the service module, named SaaS. Where, these group of end users are either be skilled or unskilled. Recently, the most intellectual decision is to retrieve the requested data from the enormous flooded data storage through the service based cloud architecture by any type of cloud users through the remarkably efficient way using the methodologies like DBaaS, multi-tenancy, database integration. Among them, multi-tenancy and database integration can be applicable in the SaaS service model through the tightly coupled nature of service composition. But, this static service composition suffers from implementation complexity, cost factor, flexibility and scalability for further database adaptability and efficient data availability. Here, the proposed Dynamic Service Composition (abbreviated as DSC) methodology is sophisticated enough to retrieve different types of data from the multiple heterogeneous cloud databases after connectivity setup with new databases at runtime and on-demand basis. This dynamic database connectivity through the loosely coupled service composition is able to supply the requested data within a revolutionary computational speed. This methodology is able to overcome the challenges introduced by static service composition. DSC can govern multiple cloud databases through the flexible services connectivity without any information about their position in the cloud. This concept can be termed as database virtualization. Overall, the proposed DSC mechanism can monitor heterogeneous cloud databases and is responsible for significant growth over computational power for efficient data availability within a remarkable lower cost in a flexible and scalable way.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"61 1","pages":"519-526"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82672039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943218
Luxit Kapoor, Sanjeev Thakur
Biomedical image processing is a growing and demanding field. It comprises many different imaging methods, such as CT scans, X-rays, and MRI. These techniques allow us to identify even the smallest abnormalities in the human body. The primary goal of medical imaging is to extract meaningful and accurate information from these images with the least error possible. Of the various medical imaging modalities available, MRI is the most reliable and safe, as it does not involve exposing the body to any harmful radiation. The MRI can then be processed and the tumor segmented. Tumor segmentation involves several different techniques. The whole process of detecting a brain tumor from an MRI can be classified into four stages: pre-processing, segmentation, optimization, and feature extraction. This survey reviews the research of other professionals and compiles it into one paper.
{"title":"A survey on brain tumor detection using image processing techniques","authors":"Luxit Kapoor, Sanjeev Thakur","doi":"10.1109/CONFLUENCE.2017.7943218","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943218","url":null,"abstract":"Biomedical Image Processing is a growing and demanding field. It comprises of many different types of imaging methods likes CT scans, X-Ray and MRI. These techniques allow us to identify even the smallest abnormalities in the human body. The primary goal of medical imaging is to extract meaningful and accurate information from these images with the least error possible. Out of the various types of medical imaging processes available to us, MRI is the most reliable and safe. It does not involve exposing the body to any sorts of harmful radiation. This MRI can then be processed, and the tumor can be segmented. Tumor Segmentation includes the use of several different techniques. The whole process of detecting brain tumor from an MRI can be classified into four different categories: Pre-Processing, Segmentation, Optimization and Feature Extraction. This survey involves reviewing the research by other professionals and compiling it into one paper.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"24 1","pages":"582-585"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89457796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943144
Tamanna, O. Sangwan
Accurate software reliability prediction with a single universal software reliability growth model is very difficult. In this paper we review different models that use computational intelligence for prediction and describe how these techniques outperform conventional statistical models. Parameters, efficacy measures, and methodologies are summarized in tabular form.
{"title":"Computational intelligence based approaches to software reliability","authors":"Tamanna, O. Sangwan","doi":"10.1109/CONFLUENCE.2017.7943144","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943144","url":null,"abstract":"Accurate software reliability prediction with a single universal software reliability growth model is very difficult. In this ρ aper we reviewed different models which uses computational intelligence for the prediction purpose and describe how these techniques outperform conventional statistical models. Parameters, efficacy measures with methodologies are concluded in tabular form.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"108 1","pages":"171-176"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79580209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943151
Sharandeep Kaur, R. Challa, Naveen Kumar, Shano Solanki, Shalini Sharma, Khushleen Kaur
The rapid growth of information available on the Web related to movies, news, books, hotels, medicines, jobs, etc. has increased the scope of information-filtering techniques. A recommender system is a software application that uses filtering techniques and algorithms to generate personalized preferences to support users' decision making. Collaborative filtering is one type of recommender system that finds neighbors of users on the basis of items rated similarly by users, or of common users of items. It suffers from data sparsity and inaccuracy issues. In this paper, the concept of typicality from cognitive psychology is used to find the neighbors of users on the basis of their typicality degree in user groups. The typicality-based collaborative filtering (TyCo) approach using K-means and topic-model-based clustering is compared in terms of Mean Absolute Error (MAE).
{"title":"Recommendation generation using typicality based collaborative filtering","authors":"Sharandeep Kaur, R. Challa, Naveen Kumar, Shano Solanki, Shalini Sharma, Khushleen Kaur","doi":"10.1109/CONFLUENCE.2017.7943151","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943151","url":null,"abstract":"The rapid growth of information availability on the Web related to movies, news, books, hotels, medicines, jobs etc. have increased the scope of information filtering techniques. Recommender System is software application that uses filtering techniques and algorithms to generate personalized preferences to support decision making of the users. Collaborative Filtering is one type of recommender system that finds neighbors of users on the basis of similar rated items by users or common users of items. It suffers from data sparsity and inaccuracy issues. In this paper, concept of typicality from cognitive psychology is used to find the neighbors of users on the basis of on their typicality degree in user groups. Typicality based Collaborative Filtering (TyCo) approach using K-means and Topic model based clustering is compared in terms of Mean Absolute Error (MAE).","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"86 1","pages":"210-215"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79648711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-01-01 | DOI: 10.1109/CONFLUENCE.2017.7943145
T. Pattanshetti, V. Attar
An enormous amount of data is being generated at a tremendous rate by multiple sources, and this data often exists in different formats, making it quite difficult to process using traditional methods. The platforms used for processing this type of data rely on distributed architectures such as cloud computing and Hadoop. The processing of big data can be carried out efficiently by exploiting the characteristics of the underlying platforms. With efficient algorithms and software metrics, and by identifying the relationships among these measures, system characteristics can be evaluated in order to improve the overall performance of the computing system. By focusing on the measures that play an important role in determining overall performance, service level agreements can also be revised. This paper presents a survey of different performance modeling techniques for big data applications. One of the key concepts in performance modeling is finding relevant parameters that accurately represent the performance of big data platforms. These extracted performance measures are mapped onto software quality concepts, which are then used for defining service level agreements.
{"title":"Survey of performance modeling of big data applications","authors":"T. Pattanshetti, V. Attar","doi":"10.1109/CONFLUENCE.2017.7943145","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943145","url":null,"abstract":"Enormous amount of data is being generated at a tremendous rate by multiple sources, often this data exists in different formats thus making it quite difficult to process the data using traditional methods. The platforms used for processing this type of data rely on distributed architecture like Cloud computing, Hadoop etc. The processing of big data can be efficiently carried out by exploring the characteristics of underlying platforms. With the advent of efficient algorithms, software metrics and by identifying the relationship amongst these measures, system characteristics can be evaluated in order to improve the overall performance of the computing system. By focusing on these measures which play important role in determining the overall performance, service level agreements can also be revised. This paper presents a survey of different performance modeling techniques of big data applications. One of the key concepts in performance modeling is finding relevant parameters which accurately represent performance of big data platforms. These extracted relevant performances measures are mapped onto software qualify concepts which are then used for defining service level agreements.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"7 1","pages":"177-181"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78906843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}