Pub Date: 2021-03-17 | DOI: 10.1109/INDIACom51348.2021.00149
M. Banday, A. H. Mir
Forensic odontology is a means of human identification in forensics using the dental records of individuals. A dentist extracts the information through manual comparison of antemortem and postmortem dental features, which is a very time-consuming process. Thus, an automatic dental identification system is needed that can save time in identifying people, especially in major catastrophes such as tsunamis, airplane crashes, and fire accidents, when conventional biometrics such as fingerprints, face, iris, and palm print are unavailable. This paper presents a new technique for person identification that extracts features from the mandibular bone using panoramic dental X-rays. The system first obtains the outer mandibular contour coordinates, and a time series is then derived from these coordinates, which captures the mandibular structure. A Complex Autoregression (CAR) model is fitted to the acquired time series, and the resulting CAR coefficients represent the mandible features. These feature vectors acquired from the mandible are then used for identification of individuals. The experiments show that the system performs well in identifying individuals from panoramic dental radiograms, with a recognition rate of up to 79.3% and an identification rate of 80%.
Title: "Biometric Identification System using Panoramic Dental Radiograms based on CAR Model". Published in: 2021 8th International Conference on Computing for Sustainable Global Development (INDIACom).
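The core feature-extraction step, fitting a complex autoregressive model to a contour-derived time series, can be sketched as follows. This is a minimal illustration on a toy elliptical contour, not the paper's radiogram pipeline; the AR order and the least-squares fitting method are assumptions.

```python
import numpy as np

def fit_car(z, order=4):
    """Fit a complex autoregressive (CAR) model of the given order to a
    complex-valued time series z by linear least squares; the returned
    coefficients serve as the feature vector."""
    z = np.asarray(z, dtype=complex)
    # Design matrix of lagged values: z[t] ~ sum_k a_k * z[t-k]
    X = np.column_stack(
        [z[order - k - 1 : len(z) - k - 1] for k in range(order)]
    )
    y = z[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Toy "mandibular contour": an ellipse encoded as x(t) + i*y(t)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = 3.0 * np.cos(t) + 1j * 2.0 * np.sin(t)
features = fit_car(contour, order=4)
```

Encoding the 2-D contour as one complex series keeps the x and y coordinates coupled in a single model, which is the usual motivation for CAR over two separate real AR fits.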
Cache memory is incorporated in systems to bridge the performance gap between main memory and the CPU. Since processor speeds are ever-increasing, a faster cache memory is needed to help close the gap between processor and memory speed. This paper therefore proposes an architecture built around three improvement techniques, namely victim cache, sub-blocks, and memory bank. These techniques are applied one after another to improve the speed and performance of the cache relative to main memory. Established metrics, namely miss penalty, cache access time, and miss rate, are used to estimate cache performance after the proposed approach is applied. The evaluation shows that, at level 1, the victim cache technique decreases the miss rate; at level 2, the sub-block division technique further reduces the miss penalty; and at level 3, the memory bank technique further decreases memory access time. Thus, using the suggested approach, cache memory performance can be improved severalfold.
Title: "Cache Memory: An Analysis on Performance Issues". Authors: Sonia, Ahmad Alsharef, Pankaj Jain, Monika Arora, Syed Rameem Zahra, Gaurav Gupta. Pub Date: 2021-03-17 | DOI: 10.1109/INDIACom51348.2021.00033
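The metrics above combine through the standard average-memory-access-time formula, AMAT = hit time + miss rate x miss penalty. The sketch below uses illustrative numbers, not the paper's measurements, to show how each level's technique lowers a different term:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time in cycles:
    AMAT = hit_time + miss_rate * miss_penalty."""
    return hit_time + miss_rate * miss_penalty

# Illustrative numbers only: each level improves one factor.
baseline  = amat(1, 0.10, 100)   # no optimization
after_vc  = amat(1, 0.06, 100)   # level 1: victim cache lowers miss rate
after_sub = amat(1, 0.06, 70)    # level 2: sub-blocking lowers miss penalty
```

A level-3 memory bank would additionally shave the hit/access time term; the point is that the three techniques attack independent terms of the same formula.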
Pub Date: 2021-03-17 | DOI: 10.1109/INDIACom51348.2021.00046
P. S. Khatoon, Muqeem Ahmed
In agrosystems, the soil plays a determining role through the multitude of ecosystem services it provides. Agriculture exploits natural spaces and resources for production useful to humans. Soil is the primary source of nutrients for plants: the roots absorb nutrient ions from soil and water, and additional nutrients must be mixed into the soil before cultivation begins. Use of excess chemicals has adverse effects on both the crop and the environment, so the optimal nutrient requirement must be met for maximum yield. This paper presents a crop selection criterion based on the K-nearest neighbour algorithm. The proposed method uses farm inputs such as soil type, climate, micronutrients, macronutrients, and water source to select the crop best suited to the farm. The algorithm also reports the quantity of deficient nutrients required for crop production. This methodology can be integrated into a semantic interoperability framework for Internet of Things (IoT) based applications. The proposed method accurately estimated the crop needs and the corresponding quantity of nutrients necessary and sufficient to achieve a production objective defined by the yield.
Title: "A Crop Selection Framework using K Nearest Neighbour for IoT Semantic Interoperability Applications".
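The K-nearest-neighbour selection step can be sketched as a majority vote over the closest training farms. The feature set, labels, and k below are hypothetical illustrations, not the paper's configuration:

```python
import math
from collections import Counter

def knn_predict(samples, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours
    (Euclidean distance over numeric farm-input features)."""
    dists = sorted(
        (math.dist(s, query), lbl) for s, lbl in zip(samples, labels)
    )
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical feature vectors: (soil pH, nitrogen mg/kg, rainfall mm)
farms = [(6.5, 80, 900), (6.8, 75, 850), (5.5, 40, 400), (5.2, 35, 450)]
crops = ["rice", "rice", "millet", "millet"]
best_crop = knn_predict(farms, crops, (6.6, 78, 880), k=3)
```

In practice the features would need scaling so that rainfall does not dominate the distance; that normalization step is omitted here for brevity.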
Pub Date: 2021-03-17 | DOI: 10.1109/INDIACom51348.2021.00085
N. Patel, Dhiren R. Patel
The term 'language' in NLP refers to natural languages such as Gujarati, Hindi, and English, which we use in daily life to communicate. Most NLP research has centred on English and other European languages; NLP research on Indian languages such as Gujarati has commenced only in the last few years. The focus of this paper is to demonstrate a road map for implementing the Gujarati grammar concept of "sandhi". In our words, sandhi is a word segmentation process, and it is present in most South Asian languages, such as Sanskrit, Hindi, and Gujarati (written in scripts such as Devanagari), and even in Chinese and Thai. Sandhi leads to phonetic transformation at the word boundaries of a written chunk, where the sounds at the ends of words join together to form a single chunk of the character sequence. Our main focus is on a rule-based implementation of sandhi. Like every Indian scripted language, Gujarati grammar has its own specified rules of composition for combining consonants, vowels, and modifiers. We have identified certain rules through which we accomplish a practical implementation of sandhi. Many sandhi rules, each denoting a unique combination of phonetic transformations, are documented in the grammatical tradition of Gujarati. Sandhi makes no syntactic or semantic changes to the words involved; it is an optional operation that depends only on the discretion of the writer.
Title: "Implementation Approach of Indian Language Gujarati Grammar's Concept “sandhi” using the Concepts of Rule-based NLP".
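A rule-based sandhi operation can be sketched as a lookup table over word-boundary sound pairs. The romanised rules below are generic illustrations of classical vowel sandhi, not the paper's actual Gujarati rule set:

```python
# Tiny illustrative rule table (romanised, hypothetical): each entry maps
# a (final sound, initial sound) pair at a word boundary to its joined
# form, mimicking vowel sandhi such as a + i -> e.
SANDHI_RULES = {
    ("a", "a"): "aa",   # e.g. deva + alaya -> devaalaya
    ("a", "i"): "e",    # e.g. gana + isha  -> ganesha
    ("a", "u"): "o",
}

def join_sandhi(w1, w2):
    """Apply the first matching boundary rule; otherwise concatenate."""
    key = (w1[-1], w2[0])
    if key in SANDHI_RULES:
        return w1[:-1] + SANDHI_RULES[key] + w2[1:]
    return w1 + w2

word = join_sandhi("gana", "isha")
```

The reverse direction, sandhi splitting (segmentation), would enumerate candidate boundary positions and apply the same table inverted, which is where most of the rule-engineering effort lies.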
Pub Date: 2021-03-17 | DOI: 10.1109/INDIACom51348.2021.00048
Suman Sharma, Sunil Jangid, P. Jain
A parking lot operator (PLO) can provide V2G regulation up/down services to the System Operator (SO) for grid stability. However, the PLO faces multiple uncertainties in market prices, viz. energy and regulation prices, and in mobility behaviour dynamics, which severely affect its V2G operational behaviour. The proposed work models the integration of a Price-Based Demand Response Program (PBDRP) by the PLO to utilize the flexibility of EV owners, deal with the uncertainties, improve its market operations, and maximize its expected profit. The proposed stochastic programming problem is formulated by modelling these uncertainties using Monte Carlo simulation and a Kantorovich distance-based backward scenario reduction algorithm. The TOU price designed by the PLO minimizes charging cost from the EV owners' perspective. Conditional Value-at-Risk (CVaR) is employed as a coherent risk measure for risk management. Results from realistic case studies illustrate that decisions based on the proposed approach provide a better trade-off between expected profit and risk.
Title: "Stochastic Scheduling of Parking Lot Operator in Energy and Regulation Markets amalgamating PBDR".
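The CVaR risk measure can be estimated directly from Monte Carlo profit scenarios: it is the expected profit over the worst (1 - alpha) fraction of equiprobable scenarios. A minimal sketch with toy numbers, not the paper's market data:

```python
import numpy as np

def cvar(profits, alpha=0.95):
    """Conditional Value-at-Risk of a profit distribution: the mean
    profit over the worst (1 - alpha) fraction of scenarios."""
    p = np.sort(np.asarray(profits, dtype=float))        # worst first
    n_tail = max(1, int(np.ceil((1 - alpha) * len(p))))
    return p[:n_tail].mean()

# 1000 equiprobable Monte Carlo profit scenarios (toy distribution)
rng = np.random.default_rng(0)
scenarios = rng.normal(loc=100.0, scale=20.0, size=1000)
risk = cvar(scenarios, alpha=0.95)
```

In a risk-averse scheduling objective, the operator maximizes a weighted sum of expected profit and CVaR, so raising the risk weight trades expected profit for a better worst-case tail.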
Pub Date: 2021-03-17 | DOI: 10.1109/INDIACom51348.2021.00152
D. Agarwal, Aditya Rastogi, Parv Rustagi, Vaibhav Nijhawan
Robotics is one of today's emerging fields. A robot can be defined as a designed gadget that assists humans in their day-to-day activities by amalgamating electronics and mechanical engineering. Robots are taking on a significant number of jobs across sectors such as construction, the military, and medicine. Various attempts have been made to build interfaces between users and PC-based frameworks from human gestures, and such gesture-based interfaces can substitute for the regular interface gadgets. Following essential robots like the line-follower robot or the computer-controlled robot, robots operated by human gestures, in particular the accelerometer-based hand-gesture robot, are on the rise. This technique will narrow the divide between the digital and physical worlds. In this paper, we show how a robot can be moved using hand gestures.
Title: "Real Time RF Based Gesture Controlled Robotic Vehicle".
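The accelerometer-to-command mapping at the heart of such a gesture-controlled robot can be sketched with simple tilt thresholds. The threshold value and command names are illustrative assumptions, not the paper's firmware:

```python
def gesture_command(ax, ay, threshold=0.3):
    """Map accelerometer tilt (gravity components along x/y, in g) to a
    drive command; values inside the dead zone mean stop."""
    if ay > threshold:
        return "FORWARD"
    if ay < -threshold:
        return "BACKWARD"
    if ax > threshold:
        return "RIGHT"
    if ax < -threshold:
        return "LEFT"
    return "STOP"

cmd = gesture_command(0.05, 0.6)   # hand tilted forward
```

On real hardware the command string would be encoded and sent over the RF link each polling cycle; the dead zone keeps sensor noise from twitching the vehicle.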
Pub Date: 2021-03-17 | DOI: 10.1109/INDIACom51348.2021.00043
Anandhi Ramachandran, S. Sarbadhikari
COVID-19 has led to unprecedented utilization of digital solutions to combat the pandemic. The purpose of this article is to present an overview of digital technology adoption for handling COVID in the country and to explore the challenges to its continued adoption in the post-COVID era. This work is based on articles, news, and reports collected from literature databases such as PubMed and EBSCO, news reports, blogs, and government websites. The results suggest that every aspect, from successful identification of the genome of the virus, timely outbreak detection, monitoring, containment, risk assessment, prediction, and development of vaccines, to dissemination of knowledge and information related to COVID-19, is targeted through the use of digital tools. Emerging technologies such as Artificial Intelligence, Robotics, the Artificial Intelligence of Things (AIoT), and Blockchain are the focus of innovations, and telemedicine is being used increasingly for consultation. The migrant population, lack of bandwidth and infrastructure, lack of scalability and sustainability of the digital solutions, and unavailability of timely information are some of the challenges to adoption. Despite the barriers and challenges, there remains a vast opportunity for leveraging technology to combat the spread of COVID-19 in the country through the National Digital Health Mission (NDHM), the Atma Nirbhar Bharat scheme, and booming entrepreneurship models. Moreover, the learnings from COVID should be utilized to improve health outcomes in the post-pandemic era.
Title: "Digital Health for the post-COVID-19 Pandemic in India: Emerging Technologies for Healthcare".
Pub Date: 2021-03-17 | DOI: 10.1109/INDIACom51348.2021.00056
Swati Meshram, K. Wagh
Spatial data is a collection of information about geospatial locations and their events or characteristics. Spatial data are collected through various positioning techniques, viz. the Global Positioning System (GPS), remote sensing, mobile devices, etc. The large amount of easily available spatial data drives the need to effectively uncover useful and interesting patterns using machine learning algorithms such as clustering. Clustering is a technique for grouping geospatial data possessing similar properties and characteristics in order to retrieve events or patterns of significance. This paper presents a comparative analysis of various clustering algorithms and extensions of these methods, their conception, and their applications in various domains. The comparative analysis reveals that the Density Peak Clustering algorithm has high accuracy on the IRIS dataset. Finally, we present research opportunities in spatial data clustering in the future enhancement section.
Title: "Mining Intelligent Spatial Clustering Patterns: A Comparative Analysis of Different Approaches".
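Density Peak Clustering, which the comparison singles out, ranks each point by two quantities: local density rho (neighbours within a cutoff distance) and delta (distance to the nearest denser point); cluster centres have both large. A minimal sketch, with the cutoff dc as an assumed parameter:

```python
import numpy as np

def density_peaks(X, dc=1.0):
    """Compute the two Density Peak Clustering quantities: rho_i, the
    number of points within cutoff dc, and delta_i, the distance to the
    nearest point of higher (or earlier-ranked) density. Centres stand
    out as points where both rho and delta are large."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = (D < dc).sum(axis=1) - 1        # exclude the point itself
    order = np.argsort(-rho)              # densest points first
    delta = np.empty(len(X))
    delta[order[0]] = D[order[0]].max()   # global peak gets the max distance
    for rank in range(1, len(X)):
        i = order[rank]
        delta[i] = D[i, order[:rank]].min()
    return rho, delta

# Two tight toy clusters; one representative of each gets a large delta
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]])
rho, delta = density_peaks(X, dc=0.5)
```

Plotting rho against delta (the "decision graph") makes the cluster centres visually obvious, which is the algorithm's main appeal over k-means-style methods.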
Pub Date: 2021-03-17 | DOI: 10.1109/INDIACom51348.2021.00052
Dr.Bharanidharan G, S. Jayalakshmi
In modern Data Centers (DCs), the most significant and challenging task is managing cloud resources and efficiently allocating Virtual Machines (VMs) or containers to Physical Machines (PMs). Several schemes have been proposed in this regard, including VM placement that accounts for resource utilization. Consolidation can be performed efficiently by discovering "opportunities" for migrating VMs and estimating resource utilization for VM placement. The reduction of energy used by heterogeneous physical resources in cloud DCs is achieved through VM consolidation, which helps minimize the number of PMs in use subject to Quality of Service (QoS) constraints. This paper therefore proposes predictive VM placement using efficient Learning Automata (LA) with a probability-distribution action set, termed Probability Distribution Action-set Learning Automata (PDALA), for VM placement across heterogeneous cloud DCs. The proposed algorithm benefits from LA theory and a correlation-coefficient parameter to make the best decisions on VM allocation. The CloudSim Plus simulator is used to produce results, and the simulation output is compared with Power-Aware Best Fit Decreasing (PABFD) as a reactive VM placement baseline. The performance of the proposed PDALA method is evaluated on parameters such as VM migrations, SLA violations, and energy consumption, showing comparatively better performance than the existing reactive VM placement.
Title: "Predictive Virtual Machine Placement for Energy Efficient Scalable Resource Provisioning in Modern Data Centers".
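The probability update underlying a learning automaton can be sketched with the classic linear reward-inaction (L_R-I) scheme: on a favourable environment response, probability mass shifts toward the chosen action. This is a generic LA rule, not necessarily the exact PDALA update:

```python
def lri_update(probs, chosen, reward, lr=0.1):
    """Linear Reward-Inaction update for an action-probability vector:
    a favourable response moves mass toward the chosen action; an
    unfavourable one leaves the distribution unchanged."""
    if not reward:
        return list(probs)
    return [
        p + lr * (1 - p) if i == chosen else p * (1 - lr)
        for i, p in enumerate(probs)
    ]

# Three candidate PMs for a VM; PM 1 keeps giving a favourable response
probs = [1 / 3, 1 / 3, 1 / 3]
for _ in range(50):
    probs = lri_update(probs, chosen=1, reward=True)
```

The update preserves the total probability mass (the chosen action gains exactly what the others lose), so repeated rewards drive the automaton to converge on the best-performing PM.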
Pub Date: 2021-03-17 | DOI: 10.1109/INDIACom51348.2021.00168
Rahul Deo Sah, Sibo Prasad Patro, Neelamadhab Padhy, Nagesh Salimath
Data mining plays an important role in disease symptom prediction; tasks such as heart disease prediction, breast cancer prediction, and the analysis of diabetic patients all involve data mining techniques. Diabetes and its symptoms are well known, and as information technology spreads and becomes ever more involved in the medical and health fields, it helps find solutions for diagnosing and treating the disease. Data models are used to classify the dataset for disease prediction, and classification techniques offer quicker and more diverse solutions. Two algorithmic approaches, Deep Learning and Gradient Boosted Trees, achieve predicted values of 32.20 and 27.73, respectively. As this research shows, the performance of Deep Learning is better than that of Gradient Boosted Trees.
Title: "Diabetics Patients Analysis Using Deep Learning and Gradient Boosted Trees".
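Gradient boosted trees can be sketched in a few lines: each weak learner (here a one-dimensional regression stump) fits the residuals of the current ensemble under squared loss. The data below are toy values under assumed hyperparameters, not the paper's diabetes dataset:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-threshold regression stump on a 1-D feature."""
    best = None
    for thr in np.unique(x):
        left, right = residual[x <= thr], residual[x > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() \
            + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, thr, left.mean(), right.mean())
    _, thr, lv, rv = best
    return lambda q: np.where(q <= thr, lv, rv)

def gbt_fit(x, y, n_trees=20, lr=0.3):
    """Gradient boosting for squared loss: each stump fits the
    residuals left by the ensemble so far."""
    pred = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(n_trees):
        stump = fit_stump(x, y - pred)
        pred = pred + lr * stump(x)
        stumps.append(stump)
    base = y.mean()
    return lambda q: base + lr * sum(s(q) for s in stumps)

# Toy glucose-like feature predicting a binary risk score
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
model = gbt_fit(x, y)
```

The learning rate shrinks each stump's contribution, so the ensemble approaches the targets geometrically; libraries add regularization, multi-feature trees, and other losses on top of this same loop.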