Pub Date: 2016-07-11 | DOI: 10.1109/ICGHPC.2016.7508074
J. Monisha, P. Jeba, M. Bhuvaneswari, K. Muneeswaran
Websites are the primary medium through which an organization communicates with its customers, so the navigational usability and accessibility of a website are crucial to gaining competitive advantage. Understanding how customers use the website provides insight into their behavior, and web server logs contain latent information about that usage. User sessions are sequences of pages accessed by a user within a specific period, and they are reconstructed from the web server logs. A Simulated Annealing technique is used to improve session identification. Because browsing behavior is non-deterministic, soft clustering methods assign each session a membership value for every cluster; a modified form of Fuzzy C-Means is used for the clustering. The framework involves access log preprocessing, user identification, session identification, and Mountain density function (MDF)-based fuzzy clustering. The resulting clusters represent navigational behavior common among users.
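The paper's modified Fuzzy C-Means is not detailed in this abstract; as a rough illustration, standard Fuzzy C-Means soft-assigns each session vector a membership in every cluster. The toy session feature vectors below are hypothetical:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Standard Fuzzy C-Means: soft-assigns each row of X to c clusters."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m                              # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        U = 1.0 / (d ** (2 / (m - 1)))          # closer centers get higher membership
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Hypothetical "session" vectors: visit counts for two page groups
X = np.array([[9., 1.], [8., 2.], [1., 9.], [2., 8.]])
centers, U = fuzzy_c_means(X, c=2)
```

Each row of `U` gives one session's graded membership across the clusters, which is the soft-assignment behavior the abstract motivates.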
Title: Extracting usage patterns from web server log (2016 2nd International Conference on Green High Performance Computing (ICGHPC))
Pub Date: 2016-02-01 | DOI: 10.1109/ICGHPC.2016.7508063
P. Lalley, T. Latha
Embedded systems play a vital role in the consumer industry. Complex applications need systems containing multiple heterogeneous processors that run in parallel to speed up execution, and area constraints have driven the integration of these processors into a single chip, the Multiprocessor System on Chip (MPSoC). Because such systems should be reusable and debuggable, designers develop reconfigurable MPSoCs on Field Programmable Gate Arrays (FPGAs) rather than Application Specific Integrated Circuits (ASICs). The MPSoC platform is central to parallel processor architecture design; however, as the number of processing elements on one chip grows, task decomposition and scheduling become major bottlenecks. To execute an application, its software is split into tasks, which are mapped to the available processors and scheduled for execution as resources become ready. Selecting the most suitable candidate task for a particular processor is critical: hardware-related tasks execute on hardware accelerators, while software tasks execute on processors. Because a software scheduler occupies considerable internal memory, the scheduler is instead implemented as programmable hardware in the reconfigurable MPSoC alongside a NIOS II processor. An algorithm for optimized scheduling on the target architecture is proposed, and a literature survey of hardware schedulers and the new target MPSoC architecture is presented. Quartus II version 12.1 and SOPC Builder are used to configure the NIOS II processor, and the Nios II EDS software tool is used to build the application code.
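The abstract does not describe the scheduling algorithm itself; as a hedged sketch of the general idea of dispatching ready tasks onto available processors, a greedy list scheduler can assign each task to the processor that frees up earliest. All task costs and the processor count below are invented for illustration:

```python
import heapq

def list_schedule(task_costs, n_procs):
    """Greedy list scheduling: assign each task to the processor
    that becomes free earliest (min-heap of (free_time, proc_id))."""
    heap = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    assignment = []                       # (proc, start, finish) per task
    for cost in task_costs:
        free_at, proc = heapq.heappop(heap)
        assignment.append((proc, free_at, free_at + cost))
        heapq.heappush(heap, (free_at + cost, proc))
    makespan = max(t for t, _ in heap)    # latest finish over all processors
    return assignment, makespan

# Hypothetical task execution costs on two processing elements
assignment, makespan = list_schedule([4, 3, 2, 2, 1], n_procs=2)
```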
Title: Optimized programmable hardware scheduler for reconfigurable MPSoCs
Pub Date: 2016-02-01 | DOI: 10.1109/ICGHPC.2016.7508067
D. P. Mahato, A. Maurya, A. Tripathi, Ravi Shankar Singh
Dynamic and decentralized load balancing in transaction-oriented grid services is a challenge due to their heterogeneous, real-time, autonomous, and adaptive nature. Executing these services increases the load on processing nodes and on the resources required when tasks are recovered from failures. Task recovery may occur at two levels, local and replicated, and in both cases the job queues at global and local nodes fill with incoming new tasks and older failed tasks. This paper presents a model based on a sender-initiated dynamic and adaptive load balancing approach (SI-DALB) over a hypercube topology. The model is built on Coloured Petri Nets (CPNs) and uses a decentralized approach to balance and manage load distribution among resources. Experimental results are validated against a model with no load balancing (NoLB) and show that the proposed algorithm is effective in distributing and balancing the loads of transaction-oriented grid services.
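A minimal sketch of the sender-initiated idea on a hypercube, assuming a simple integer load model and a fixed threshold (both invented for illustration; the paper's CPN-based model is far richer):

```python
def hypercube_neighbors(node, dim):
    """Neighbors of `node` in a dim-dimensional hypercube differ in one bit."""
    return [node ^ (1 << b) for b in range(dim)]

def sender_initiated_step(loads, dim, threshold):
    """One round: each node above the threshold ships one task to its
    least-loaded hypercube neighbor (decentralized, sender-initiated)."""
    new = list(loads)
    for node in range(len(loads)):
        if new[node] > threshold:
            target = min(hypercube_neighbors(node, dim), key=lambda v: new[v])
            if new[target] < new[node]:
                new[node] -= 1
                new[target] += 1
    return new

loads = [8, 1, 1, 2]          # 2-dimensional hypercube (4 nodes), node 0 overloaded
balanced = sender_initiated_step(loads, dim=2, threshold=3)
```

Repeating the step drains the overloaded node one task at a time, which mimics the incremental, neighbor-local transfers of a sender-initiated scheme.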
Title: Dynamic and adaptive load balancing in transaction oriented grid service
Pub Date: 2016-02-01 | DOI: 10.1109/ICGHPC.2016.7508066
Robert Mijakovic, Michael Firbach, M. Gerndt
Due to the complexity and diversity of new parallel architectures, automatic tuning of parallel applications has become increasingly important for achieving acceptable performance as well as performance portability. The European AutoTune project developed a tuning framework that closely integrates and automates performance analysis and performance tuning. The Periscope Tuning Framework (PTF) relies on a flexible plugin mechanism and provides plugins for various tuning aspects, each codifying expert knowledge for performance or energy-efficiency tuning. PTF can tune serial and parallel codes for homogeneous and heterogeneous target hardware, and its output is a set of tuning recommendations that can be integrated into the production version of the code. In this paper, we present the latest developments in the design of PTF, aiming at (1) higher portability and scalability through the Score-P measurement infrastructure, (2) extending Score-P with tuning capabilities, (3) increased analysis capabilities through new analysis strategies, and (4) increased tuning capabilities through new plugins.
Title: An architecture for flexible auto-tuning: The Periscope Tuning Framework 2.0
Pub Date: 2016-02-01 | DOI: 10.1109/ICGHPC.2016.7508072
Jayanthi G, V Uma
Remote sensing of resources in a geographic space at regular temporal intervals has paved the way for the evolution of geo-spatial information processing. Knowledge engineering of the facts acquired through this technology primarily aims at qualitative results that support humans in solving complex tasks which cannot be solved by quantitative relational query processing in Database Management Systems (DBMS). This necessitates an automated inference mechanism built over relational databases. Automated reasoning, a systematic process of formal symbolic representation that codifies the acquired facts, enables the system to infer new knowledge which can further update those facts. A formal representation of an Event Attributed Spatial Entity (EASE) knowledge base is proposed using Allen's interval calculus and Randell's RCC-8. The knowledge base formalizes spatial entities in a geographic region whose temporal attributes are events occurring in an interval, at a time instant, or over successive intervals, so that event-based queries on the prediction of spatial processes can be answered qualitatively. The significance of this formal approach is shown through query evaluation on real datasets, and the working of the knowledge base is explained with illustrative results. The paper closes with directions for enhancing EASE and exploring its use.
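Allen's interval calculus underpins the temporal side of EASE; as an illustration (not the paper's implementation), a pair of closed intervals can be classified into one of Allen's 13 relations, with inverse relations marked by an `_i` suffix:

```python
def allen_relation(a, b):
    """Classify a pair of closed intervals (start, end) into one of
    Allen's 13 interval relations (inverses reported with an '_i' suffix)."""
    (as_, ae), (bs, be) = a, b
    if ae < bs:   return "before"
    if be < as_:  return "before_i"
    if ae == bs:  return "meets"
    if be == as_: return "meets_i"
    if as_ == bs and ae == be: return "equals"
    if as_ == bs: return "starts" if ae < be else "starts_i"
    if ae == be:  return "finishes" if as_ > bs else "finishes_i"
    if bs < as_ and ae < be:   return "during"
    if as_ < bs and be < ae:   return "during_i"
    return "overlaps" if as_ < bs else "overlaps_i"

allen_relation((1, 3), (3, 6))   # one event's end coincides with the next one's start
```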
Title: Event attributed Spatial Entity Knowledge (EASE) based Spatio-Temporal reasoning to infer geographic processes
Pub Date: 2016-02-01 | DOI: 10.1109/ICGHPC.2016.7508061
Carla Guillén, C. Navarrete, D. Brayford, Wolfram Hesse, Matthias Brehm
Energy consumption will become one of the dominant cost factors governing the next generation of large HPC centers. In this paper we present the Dynamic Voltage Frequency Scaling (DVFS) plugin, which automatically tunes several energy-related objectives at the region level of HPC applications. The plugin works with the Periscope Tuning Framework, which provides automatic analysis, experiment creation, and evaluation. Its tuning actions change the CPU frequency via DVFS, and its objectives include energy consumption, total cost of ownership, energy-delay product, and power capping. The tuning is based on a model that relies on performance data to predict energy consumption, time, and power consumption at different CPU frequencies.
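As an illustration of frequency selection against an energy-delay-product objective, a sketch with invented per-frequency time and power predictions (the paper's model is derived from measured performance data, not the toy numbers below):

```python
def pick_frequency(candidates):
    """Choose the frequency minimizing the energy-delay product (EDP),
    given predicted (time_s, power_w) per candidate frequency."""
    def edp(f):
        t, p = candidates[f]
        return (p * t) * t            # energy (J = W*s) times delay (s)
    return min(candidates, key=edp)

# Hypothetical predictions for one code region: GHz -> (seconds, watts)
predicted = {1.2: (10.0, 40.0), 2.0: (6.5, 70.0), 2.7: (5.0, 130.0)}
best = pick_frequency(predicted)
```

The EDP objective penalizes both slow low-frequency runs and power-hungry high-frequency runs, so an intermediate frequency often wins, as it does here.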
Title: DVFS automatic tuning plugin for energy related tuning objectives
Pub Date: 2016-02-01 | DOI: 10.1109/ICGHPC.2016.7508071
Bonaventura Del Monte, R. Prodan
In the last fifteen years, Big Data has created a new generation of data analysis problems, involving not only the problems themselves but also the way the data are handled. Managing terabytes of data without a proper infrastructure is unfeasible, so a smart way to process the data is also necessary. One solution is the creation of general algorithms that learn from observations; in this context, Deep Learning promises general, powerful, and fast machine learning algorithms, moving them one step closer to artificial intelligence. Nevertheless, fitting a deep learning model may require a huge amount of time, so the need for scalable infrastructures to process large-scale data sets has become ever more pressing. In this paper, we present a framework for training deep neural networks using heterogeneous computing resources of either grid or cloud infrastructures.
Title: A scalable GPU-enabled framework for training deep neural networks
Pub Date: 2016-02-01 | DOI: 10.1109/ICGHPC.2016.7508055
Shajulin Benedict
The 2016 2nd International Conference on Green High Performance Computing (26–27 February 2016) aimed to bring together specialists and researchers working on energy-related issues in HPC domains such as Grid, Cloud, and massively parallel computing.
Title: Message from the organizing chair
Pub Date: 2016-02-01 | DOI: 10.1109/ICGHPC.2016.7508068
N. Brintha, J. Jappes, S. Benedict
Resource scheduling and management is an important problem in Cloud Manufacturing. Optimized job scheduling is a key issue when different resources are scheduled among heterogeneous users: the resources are spread across diverse cloud locations, and the major task is to distribute them effectively so that makespan and completion time are reduced. In this paper, a Modified Ant Colony based optimization technique is proposed to optimize resources through distributed computation. ACO chooses among alternative dispatching rules to determine the processing order at each resource. Rather than exploring a larger search space, this approach prunes the space and yields better solutions, reducing the delay in allocating resources to users through an adaptive, global search. It reduces the total completion time of jobs and also takes into account process migration time. A series of experiments was conducted, and the results are compared with other heuristic algorithms such as PSO, showing that this approach can produce optimal solutions quickly by reducing delays.
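A hedged sketch of the ACO step of choosing among alternative dispatching rules by pheromone-weighted roulette-wheel selection, with evaporation and reinforcement (the rule names and parameters below are invented for illustration, not taken from the paper):

```python
import random

RULES = ["SPT", "LPT", "FIFO", "EDD"]     # hypothetical candidate dispatching rules

def choose_rule(pheromone, rng):
    """Roulette-wheel selection: a rule is picked with probability
    proportional to its pheromone level (the ACO exploitation step)."""
    total = sum(pheromone.values())
    r = rng.random() * total
    acc = 0.0
    for rule in RULES:
        acc += pheromone[rule]
        if r <= acc:
            return rule
    return RULES[-1]

def reinforce(pheromone, rule, quality, rho=0.1):
    """Evaporate all trails, then deposit pheromone on the chosen rule
    in proportion to the schedule quality it produced."""
    for k in pheromone:
        pheromone[k] *= (1.0 - rho)
    pheromone[rule] += quality

pher = {r: 1.0 for r in RULES}
rng = random.Random(0)
rule = choose_rule(pher, rng)
reinforce(pher, rule, quality=1.0)
```

Over many iterations, rules that repeatedly yield short makespans accumulate pheromone and are selected more often, which is how the search space is effectively pruned.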
Title: A Modified Ant Colony based optimization for managing Cloud resources in manufacturing sector
Pub Date: 2016-02-01 | DOI: 10.1109/ICGHPC.2016.7508073
S. Nalini, A. Valarmathi
Wireless sensor networks can be deployed at sites where traditional networking infrastructure is practically impossible, but energy, memory, computation resources, and transmission range are all limited. In such a network, sensor nodes are grouped into clusters; each cluster aggregates data and limits transmissions, so data are disseminated to the cluster head and then propagated to the base station. Storage is one of the challenging constraints, so this paper focuses on reducing the rule set by combining association rules with fuzzy logic to predict the cluster head. Support and confidence are evaluated for the rule set, and reduced final rule sets are generated from rules whose confidence exceeds a threshold. Simulation results show that a minimal rule set can predict the cluster head with high potential in the group; the node occupies less memory for the reduced rule set, computational complexity is reduced, and network lifetime is thereby enhanced.
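Support and confidence, the quantities the rule pruning rests on, can be sketched on toy fuzzified node attributes (all item names and the threshold below are hypothetical):

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, antecedent, consequent):
    """conf(A -> B) = support(A u B) / support(A)."""
    return (support(transactions, antecedent | consequent)
            / support(transactions, antecedent))

# Toy node observations with fuzzified attribute labels as discrete items
T = [{"energy_high", "degree_high", "head"},
     {"energy_high", "degree_low"},
     {"energy_high", "degree_high", "head"},
     {"energy_low", "degree_high"}]

conf = confidence(T, {"energy_high", "degree_high"}, {"head"})
rule_kept = conf >= 0.8       # prune rules below the confidence threshold
```

Rules failing the threshold are dropped, shrinking the rule bin a node must store, which is the memory saving the abstract claims.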
Title: Fuzzy association rule based Cluster head selection in wireless Sensor Network