Priority based Performance Improved Algorithm for Meta-task Scheduling in Cloud environment
D. Amalarethinam, S. Kavitha
Pub Date : 2017-02-01  DOI: 10.1109/ICCCT2.2017.7972250

Cloud computing is an emerging paradigm concerned with allocating computing resources and services on a pay-per-use basis. Scheduling is a key task in accessing remote computers; task scheduling is an NP-complete problem, and it is harder still in a cloud environment. Effective and efficient scheduling methodologies are needed to improve the performance of cloud resources. This paper proposes a meta-task scheduling algorithm, the Priority based Performance Improved Algorithm, which considers the user priority of meta-tasks: the high-priority meta-task set is scheduled with the Min-Min algorithm, and the normal-priority set is then scheduled with the Max-Min algorithm. The proposed algorithm yields a lower makespan and better resource utilization.
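The two-phase priority scheme described in the abstract can be sketched as follows; the task sets, resource names and execution times are illustrative, not taken from the paper.

```python
def min_min(tasks, ready):
    """Min-Min: repeatedly schedule the task whose earliest possible
    completion time is smallest, on the resource that achieves it."""
    tasks = dict(tasks)                       # task -> {resource: exec time}
    schedule = []
    while tasks:
        ct, t, r = min((ready[r] + c, t, r)
                       for t, costs in tasks.items()
                       for r, c in costs.items())
        ready[r] = ct                          # resource is busy until ct
        schedule.append((t, r, ct))
        del tasks[t]
    return schedule

def max_min(tasks, ready):
    """Max-Min: compute each task's best (minimum) completion time,
    then schedule the task whose best time is largest."""
    tasks = dict(tasks)
    schedule = []
    while tasks:
        best = {t: min((ready[r] + c, r) for r, c in costs.items())
                for t, costs in tasks.items()}
        t, (ct, r) = max(best.items(), key=lambda kv: kv[1][0])
        ready[r] = ct
        schedule.append((t, r, ct))
        del tasks[t]
    return schedule

# High-priority set first (Min-Min), then the normal-priority set (Max-Min),
# both sharing the same resource ready times.
ready = {"vm1": 0, "vm2": 0}
high = {"t1": {"vm1": 4, "vm2": 6}, "t2": {"vm1": 3, "vm2": 5}}
normal = {"t3": {"vm1": 8, "vm2": 2}}
plan = min_min(high, ready) + max_min(normal, ready)
makespan = max(ready.values())
```

Running the two phases against the same ready times is what lets the normal-priority tasks fill in around the already-placed high-priority ones.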
Energy efficient V-MIMO using turbo codes in Wireless Sensor Networks
K. S. Kumar, R. Amutha, TLK. Snehapiriya
Pub Date : 2017-02-01  DOI: 10.1109/ICCCT2.2017.7972288

Energy efficiency is a crucial challenge in wireless sensor networks (WSN). In this paper, Turbo Coded Cooperative Communication (TCCC) is proposed to reduce the energy consumption of wireless sensor nodes, and an energy model for Virtual Multiple-Input-Multiple-Output (V-MIMO) cooperative communication using turbo codes is presented. The proposed technique is energy efficient, requiring less transmit energy per bit than conventional uncoded schemes. The energy consumption of uncoded cooperative communication is compared with that of coded cooperative communication, and the effects of code rate, number of participating nodes, channel conditions and target Bit Error Rate (BER) on total energy consumption are investigated. The simulation results show that the proposed turbo coded cooperative communication with BPSK achieves an energy saving of 11.36% compared with the corresponding uncoded cooperative communication.
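The trade-off the paper studies, where coding adds redundant bits but lowers the transmit power each bit needs, can be illustrated with a generic first-order radio energy model. The parameter values and the 6 dB coding gain below are illustrative assumptions, not the paper's model or the source of its 11.36% figure.

```python
# First-order radio model (illustrative parameter values, not the paper's)
E_ELEC = 50e-9       # J/bit spent in transmit electronics
EPS_AMP = 100e-12    # J/bit/m^2 spent in the power amplifier

def tx_energy(bits, dist, coding_gain_db=0.0, code_rate=1.0):
    """Energy to deliver `bits` information bits over distance `dist` (m).
    Coding transmits bits/code_rate channel bits, but the coding gain
    lowers the amplifier energy each channel bit requires."""
    gain = 10 ** (-coding_gain_db / 10)   # linear power reduction from coding
    sent = bits / code_rate               # channel bits actually transmitted
    return E_ELEC * sent + EPS_AMP * sent * dist ** 2 * gain

# Rate-1/2 code with an assumed 6 dB coding gain, 1000 bits over 300 m
uncoded = tx_energy(1000, 300)
coded = tx_energy(1000, 300, coding_gain_db=6.0, code_rate=0.5)
saving = 1 - coded / uncoded              # fraction of energy saved
```

Note that at short distances the electronics term dominates and the extra coded bits can cost more than the coding gain saves, which is why papers in this area report savings as a function of range and code rate.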
Microstrip patch antenna integrated with EBG
S. Yamini, B. Panjavarnam
Pub Date : 2017-02-01  DOI: 10.1109/ICCCT2.2017.7972240

Wearable antennas facilitate applications such as telemedicine, fire-fighting and navigation. Because they are integrated into fabrics and operate close to the body, back radiation must be kept low to protect the wearer from the antenna's radiation, so the radiation characteristics of the antenna need careful consideration. This paper presents a dual-band microstrip patch antenna for wearable applications operating at 1800 MHz and 2.45 GHz, and describes the antenna's performance when integrated with an Electromagnetic Band Gap (EBG) structure. Both the patch antenna and the EBG structure are made of polyester with a dielectric constant of 1.4 and a thickness of 2.85 mm; copper sheets 35 microns thick are used as the conducting material. After integration with the EBG structure, back radiation is reduced at both 1800 MHz and 2.45 GHz. Simulated return loss and radiation patterns are presented for both configurations, showing that the radiation characteristics of the proposed design improve significantly compared with the microstrip patch antenna without EBG. The proposed antenna is compact and dual-band, making it suitable for telemedicine in the Industrial, Scientific and Medical band and for military and rescue systems.
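For a sense of the patch size such a substrate implies, the classic rectangular-patch design equations can be applied to the stated parameters (dielectric constant 1.4, thickness 2.85 mm, 2.45 GHz). This is a textbook sizing sketch, not the paper's actual geometry.

```python
from math import sqrt

C = 3e8  # speed of light, m/s

def patch_dimensions(f, eps_r, h):
    """Standard rectangular-patch design equations: width from the substrate's
    dielectric constant, then length from the effective permittivity and the
    fringing-field length extension. Returns (W, L) in metres."""
    W = C / (2 * f) * sqrt(2 / (eps_r + 1))
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
         ((eps_eff - 0.258) * (W / h + 0.8))
    L = C / (2 * f * sqrt(eps_eff)) - 2 * dL
    return W, L

# Substrate from the abstract: eps_r = 1.4, h = 2.85 mm, 2.45 GHz band
W, L = patch_dimensions(2.45e9, 1.4, 2.85e-3)
```

The low dielectric constant of a textile substrate like polyester gives a comparatively large patch (roughly 56 mm x 49 mm here), which is part of why wearable designs trade size against substrate choice.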
Explicit sarcasm handling in emotion level computation of tweets - a big data approach
A. R, S. Chitrakala
Pub Date : 2017-02-01  DOI: 10.1109/ICCCT2.2017.7972260

Social media like Twitter offers an important window into the emotions of those who use the platform to share opinions; nearly 79% of the world population use social media to express opinions on various topics. Commercial organizations such as e-commerce sites, health departments and disaster-management activities may want to compute the emotion levels of tweets to gain useful insights into users' opinions and preferences, using the results for purposes such as determining social influence, information-diffusion modeling and sentiment analysis. Existing tools for computing emotion-level polarity, however, do not consider sarcasm, which predominantly exists in short texts like tweets. This paper presents a big data approach for computing the emotion level of each tweet for a given day, with handling of explicit sarcasm in tweets. The goal is an approach that is both efficient and scalable.
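The core idea, inverting the literal polarity when an explicit sarcasm marker is present, can be sketched with a toy lexicon scorer. The word lists and markers are invented for illustration; the paper's actual pipeline is not reproduced here.

```python
# Toy lexicons -- invented for illustration, not the paper's resources
POSITIVE = {"love", "great", "wonderful"}
NEGATIVE = {"hate", "awful", "terrible"}
SARCASM_MARKERS = ("#sarcasm", "#not", "yeah right")

def emotion_level(tweet):
    """Literal polarity = positive hits minus negative hits; an explicit
    sarcasm marker inverts the literal polarity."""
    text = tweet.lower()
    words = text.split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if any(marker in text for marker in SARCASM_MARKERS):
        score = -score
    return score
```

Because each tweet is scored independently, this per-tweet function maps directly onto a MapReduce-style pipeline, which is what makes the approach scalable to a day's worth of tweets.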
A novel imputation method for effective prediction of coronary Kidney disease
S. Arasu, R. Thirumalaiselvi
Pub Date : 2017-02-01  DOI: 10.1109/ICCCT2.2017.7972256

Kidney disease has become prevalent worldwide, and predicting it is a highly complex task when handling huge datasets. A kidney-disease dataset contains patient information such as age, blood-pressure levels, albumin, sugar and red-blood-cell counts, and some features may have missing values that are important for prediction; such missing values reduce the accuracy of kidney-disease prediction. Several methods have been proposed to fill them in. An existing classification framework used a data-preprocessing method in which data cleaning fills in missing values and corrects erroneous ones: chronic kidney disease (CKD) stages are recalculated and the recalculated values are filled in for unknown ones. Although this method is efficient, it requires input from an expert in healthcare data for CKD. To remove that dependency and let a layman perform the preprocessing, Weighted Average Ensemble Learning Imputation (WAELI) is proposed. In this work, the single-value imputation model uses expectation-maximization (EM) and Random Forest (RF), which predict missing values effectively in small datasets; for huge datasets, the multiple-value imputation model estimates missing values with the help of RF, Classification And Regression Trees (CART) and C4.5. WAELI thus improves the accuracy of kidney-disease prediction. A priority-assignment algorithm is then introduced to assign a priority to each feature in the dataset, and only higher-priority features are carried forward to the classification process, making classification more efficient and reducing its time consumption.
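The ensemble step that gives WAELI its name, combining the base learners' candidate fills by weight, can be sketched as below. The learner outputs and weights are hypothetical; the paper's actual EM/RF/CART/C4.5 models are not reproduced.

```python
def waeli_impute(candidates, weights):
    """Weighted-average ensemble imputation: each base learner proposes a
    fill value for the missing cell; the final fill is their weighted mean."""
    total = sum(weights)
    return sum(v * w for v, w in zip(candidates, weights)) / total

# Hypothetical fills for one missing blood-pressure value from three base
# learners (say EM, RF, CART), weighted by assumed validation scores.
filled = waeli_impute([118.0, 124.0, 121.0], [0.5, 0.3, 0.2])
```

Weighting by validation score lets a stronger learner dominate without discarding the others, which is the usual argument for averaging over picking a single imputer.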
Job starvation avoidance with alleviation of data skewness in Big Data infrastructure
Sankari Subbiah, S. Mala, S. Nayagam
Pub Date : 2017-02-01  DOI: 10.1109/ICCCT2.2017.7972264

In the age of rushing demand for big data, Hadoop is a widely promoted cloud-based platform for the business world's big-data problems. Jobs over large data sets are executed in parallel through MapReduce on a Hadoop cluster, and a job's completion time depends on its slowest-running task: a single straggling task extends the entire job. An imbalance in the amount of data allocated to individual tasks is referred to as data skewness. An efficient dynamic data-splitting approach on Hadoop, the Hybrid scheduler, monitors samples while running batch jobs and allocates resources to slaves according to the complexity of the data and the time taken to process it. In this paper, the effectiveness of web swarming is showcased using Hadoop in Distributed Denial of Service (DDoS) attack-detection scenarios on web servers. Query processing, done through MapReduce in traditional Hadoop clusters, is replaced by the proposed blockchain query-processing algorithm, thereby improving the execution time of the assigned tasks and mitigating data skewness. The main aim of this paper is to avoid job starvation, minimizing response time efficiently while mitigating the data skewness of the existing system.
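Skew mitigation by dynamic splitting amounts to sizing each worker's share of the input from its measured processing rate, so that all splits finish at about the same time. A minimal sketch, with invented worker names and rates:

```python
def split_sizes(total_records, rates):
    """Size each worker's split in proportion to its measured processing
    rate (records/sec), so all splits take roughly equal time."""
    total_rate = sum(rates.values())
    sizes = {w: int(total_records * r / total_rate) for w, r in rates.items()}
    fastest = max(rates, key=rates.get)
    sizes[fastest] += total_records - sum(sizes.values())  # rounding remainder
    return sizes
```

Under uniform splitting the slowest worker sets the job's completion time; proportional splitting removes that straggler by construction, at the cost of needing rate measurements from sample runs.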
Deep packet inspection Management application in SDN
B. Renukadevi, S. Raja
Pub Date : 2017-02-01  DOI: 10.1109/ICCCT2.2017.7972287

The DPI Management application, which resides on the northbound side of the SDN architecture, analyses application-signature data from the network. Both the data being read and analysed and the flows provisioned from the northbound application are in JSON format for effective data representation. The data-analytics engine analyses the data stored in a non-relational database and reports the real-time applications used by network users, allowing the operator to provision flows dynamically from the network data to allow or block flows and to boost bandwidth. The DPI Management application is decoupled from the controller, so it can run in any hypervisor within the network. It can publish SNMP trap notifications to network operators on application thresholds and flow-provisioning behaviour, and it purges obsolete analysed data from the non-relational database at frequent intervals.
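A northbound application of this kind turns a detected application signature into a JSON flow rule that allows or blocks the flow. The field names and the blocked-application list below are illustrative assumptions, not any specific controller's schema:

```python
import json

BLOCKED_APPS = {"torrent"}  # hypothetical operator policy

def build_flow(signature):
    """Turn a detected application signature into a JSON flow rule.
    `signature` is a dict with the source address and detected app name."""
    action = "block" if signature["app"] in BLOCKED_APPS else "allow"
    rule = {"match": {"src": signature["src"], "app": signature["app"]},
            "action": action}
    return json.dumps(rule, sort_keys=True)
```

Keeping both the analysed data and the provisioned rules in JSON, as the abstract describes, means the same documents can flow from the analytics engine into the non-relational store and out to the controller without format conversion.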
Drowning prevention system-at sea level
S. Sivakami, K. Janani, R. Ranjana
Pub Date : 2017-02-01  DOI: 10.1109/ICCCT2.2017.7972308

Drowning is usually quick and silent; media portrayals of drowning as a loud, violent struggle have much more in common with distressed unskilled swimmers. Drowning causes the third-highest number of unintentional deaths. Many drowning-prevention systems have been proposed, but they are not suitable for deployment at sea level. Our aim is a system that prevents drowning at beaches. Two sensors are used: an oxygen sensor and a water detector. Water detectors are placed in a locket, suitable for people who play at the shore and do not go deep into the sea. If a person is pulled into the sea by the waves, the water detector in the locket is completely submerged; if it remains submerged, an alarm is triggered to alert the coast guards. The oxygen sensor is for swimmers and is placed in an armband. Oxygen levels in the sea and in the atmosphere differ drastically: if a swimmer is about to drown, the reading will stay around 80 per cent (the normal oxygen residue in the sea), at which point the sensor triggers a flotation aid and simultaneously alerts the coast guards. An underwater network is deployed with a modem at each network node, and the sensors communicate with these nodes. Similarly, sensors are placed on the seabed to indicate sudden increases in depth; when a user is about to cross them, the controller alerts the coast guards and the swimmer.
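The two alarm conditions described, a water detector that stays submerged and an oxygen reading that settles near the in-sea level of about 80 per cent, reduce to simple threshold logic over sensor samples. A toy sketch, with the sample counts and tolerance as assumed tuning values:

```python
def locket_alarm(submerged, limit=3):
    """Alarm if the locket's water detector stays submerged for `limit`
    consecutive samples (limit is an assumed tuning value)."""
    run = 0
    for wet in submerged:
        run = run + 1 if wet else 0
        if run >= limit:
            return True
    return False

def armband_alarm(o2_readings, sea_level=80, samples=3, tol=2):
    """Alarm if the armband's oxygen reading settles near the ~80%
    in-sea level for the last `samples` readings."""
    recent = o2_readings[-samples:]
    return len(recent) == samples and all(abs(r - sea_level) < tol for r in recent)
```

Requiring several consecutive readings, rather than a single one, is what keeps a splash over the locket or a brief dip of the armband from raising a false alarm.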
IoT enabled proactive indoor air quality monitoring system for sustainable health management
M. Firdhous, B. Sudantha, P.M Karunaratne
Pub Date : 2017-02-01  DOI: 10.1109/ICCCT2.2017.7972281

In recent times indoor air quality has attracted the attention of policy makers and researchers as an issue as important as outdoor air pollution. In a certain sense, indoor air quality deserves more attention than outdoor air quality, since people spend more time indoors than outdoors, and indoor environments are confined and closed compared with outdoor ones, giving pollutants less opportunity to dilute. With the advancement of technology, workplaces have become more automated, using machines to carry out tasks that were hitherto done manually. These devices emit various solids and gases into the environment during operation, and the emissions contain many substances that are harmful to human health when people are exposed to them for prolonged periods or above certain concentrations. This paper proposes an IoT-based indoor air-quality monitoring system for tracking ozone concentrations near a photocopy machine. An experimental system with a semiconductor sensor capable of monitoring ozone concentrations was installed near a high-volume photocopier. The IoT device is programmed to collect and transmit data at five-minute intervals over a Bluetooth connection to a gateway node, which in turn communicates with the processing node via the WiFi local-area network. The sensor was calibrated using standard calibration methods. As an additional capability, the proposed air-pollution monitoring system can generate warnings when the pollution level exceeds a predetermined threshold value.
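The warning capability reduces to checking each five-minute sample against a threshold. The ozone threshold value below is an assumption for illustration, not the paper's calibrated level:

```python
OZONE_THRESHOLD = 0.1   # ppm -- assumed warning level, not the paper's
SAMPLE_INTERVAL = 300   # seconds: the five-minute reporting interval

def warning_times(readings, threshold=OZONE_THRESHOLD):
    """Return the time offsets (seconds from the first sample) of readings
    that exceed the warning threshold."""
    return [i * SAMPLE_INTERVAL for i, v in enumerate(readings)
            if v > threshold]
```

Because the gateway receives a sample every five minutes, a warning is at worst one interval behind the actual exceedance, which is the latency cost of this duty-cycled design.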
Comparative analysis on Virtual Machine assignment algorithms
Priyanka C.P, Sankari Subbiah
Pub Date : 2017-02-01  DOI: 10.1109/ICCCT2.2017.7972279

Resource allocation is an important concept in cloud computing, an emerging technology in modern computing systems that provides on-demand services through dynamic allocation of resources, delivering reliable and highly available services to users. To manage the actual hardware resources of the underlying Physical Machine (PM), many jobs (user requests) are executed on virtual machines. A Virtual Machine (VM) placement algorithm places the different VMs onto the existing physical machines efficiently, so that load is balanced optimally across all available hardware resources. In the Random Resource Allocation (RRA) algorithm, jobs are placed on VMs at random and VMs are allocated to PMs at random. This survey paper gives an overview of existing virtual-machine placement techniques; the proposed random resource allocation algorithm reduces resource wastage and power consumption and also balances load across servers.
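Random Resource Allocation as described, jobs to VMs at random and VMs to PMs at random, can be sketched in a few lines; the job, VM and PM names and the fixed seed are illustrative:

```python
import random

def random_allocate(jobs, vms, pms, seed=0):
    """Random Resource Allocation: each job goes to a uniformly random VM,
    and each VM is placed on a uniformly random PM."""
    rng = random.Random(seed)            # seeded for reproducibility
    job_to_vm = {j: rng.choice(vms) for j in jobs}
    vm_to_pm = {v: rng.choice(pms) for v in vms}
    return job_to_vm, vm_to_pm

jobs = ["j1", "j2", "j3"]
vms = ["vm1", "vm2"]
pms = ["pm1", "pm2"]
job_to_vm, vm_to_pm = random_allocate(jobs, vms, pms)
```

Random placement needs no load information, which makes it cheap; the survey's point of comparison is how much utilization and balance more informed placement algorithms buy over this baseline.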