Most applications consist of several activities carried out by different processes, and processes themselves may contain child processes known as lightweight processes, or threads. Dividing an application's activities among processes follows from the ideas of reusability and sharing. Applications therefore need an inter-process communication (IPC) mechanism to establish communication between processes; IPC is a collection of mechanisms that meet the communication requirements between processes. System V defines a standard for IPC named SVIPC, but different operating systems implement this standard in different ways, so programs that use IPC mechanisms are structured differently on each operating system. Rewriting a program for each operating system is time consuming; porting, that is, writing programs so that they can be moved to different operating systems with the fewest changes, is a solution. In this survey we give a brief introduction to the IPC mechanisms of the two operating systems and describe porting Windows programs to Linux by mapping the IPC primitives. We present porting as a solution to portable IPC programming: a program written with the Windows IPC mechanisms can use our wrapper to run on the Linux operating system.
"Portable Inter Process Communication Programming" by Morteza Kashyian, Seyedeh Leili Mirtaheri and Ehsan Mousavi Khaneghah. doi:10.1109/ADVCOMP.2008.38. 2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences, 2008-09-29.
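The wrapper idea described above can be sketched as a correspondence table between Win32 IPC primitives and their closest POSIX/SVIPC counterparts, plus a thin facade exposing one portable API. This is a hypothetical illustration of the mapping approach, not the authors' actual wrapper; the class below stands in with Python's `threading.Lock` for what would be platform-conditional C code.

```python
# Illustrative sketch (not the paper's actual wrapper): a rough mapping from
# Win32 IPC primitives to their closest POSIX/SVIPC counterparts, plus a thin
# facade exposing one portable API regardless of platform.
import threading

# Correspondence table; in a real C wrapper each pair would become a
# conditional-compilation branch (#ifdef _WIN32 ... #else ...).
WIN32_TO_POSIX = {
    "CreateMutex":         "pthread_mutex_init",
    "WaitForSingleObject": "pthread_mutex_lock",
    "ReleaseMutex":        "pthread_mutex_unlock",
    "CreateSemaphore":     "sem_init",
    "CreateFileMapping":   "shmget",   # SVIPC shared memory
    "MapViewOfFile":       "shmat",
}

class PortableMutex:
    """One mutex API for both platforms; backed here by threading.Lock."""
    def __init__(self):
        self._lock = threading.Lock()   # would wrap CreateMutex / pthread_mutex_init
    def acquire(self):
        self._lock.acquire()            # would wrap WaitForSingleObject / pthread_mutex_lock
    def release(self):
        self._lock.release()            # would wrap ReleaseMutex / pthread_mutex_unlock

m = PortableMutex()
m.acquire()
m.release()
```

A program written against the `PortableMutex` facade never names a platform primitive directly, which is the property that makes the port a matter of swapping the backing implementation.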
Self-scheduling algorithms can achieve a good balance between workload and communication overhead in computational systems. In particular, quadratic self-scheduling (QSS) and exponential self-scheduling (ESS) are flexible enough to adapt to distributed systems, and are thus of interest for Internet-based grids of computers. However, these algorithms depend on several parameters, which have to be optimized for the working environment. To tackle this problem, we present a heuristic approach, based on simulated annealing (SA), to optimize all the parameters of QSS and ESS. To this end, the computational grid environment is simulated. We find that the SA-optimized parameters reduce the overall computing time of a set of tasks by up to 12% with respect to results obtained with previous, experimentally determined parameter values. Moreover, the time needed to obtain the SA-optimized parameters by simulation is negligible compared with that needed using experimental measures. In addition, we find the results to be fairly insensitive to the size of the chunks (sets of tasks sent to a processor). Finally, the results show the SA scheduling approach to be very efficient, since the overall computing time depends only linearly on the number of tasks.
"A Heuristic Approach to Task Scheduling in Internet-Based Grids of Computers" by Javier Díaz, S. Reyes, C. Muñoz-Caro and A. Niño. doi:10.1109/ADVCOMP.2008.9. 2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences, 2008-09-29.
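The two ingredients of the approach above can be illustrated together: a self-scheduling loop that hands out decreasing chunks of tasks, and a simulated-annealing loop that tunes a scheduling parameter against a cost model. This is a hedged sketch only: the actual QSS/ESS chunk formulas, the paper's grid simulator, and the cost model below (`chunks`, `cost`, `anneal` and their parameters) are stand-ins, not the authors' definitions.

```python
# Hedged sketch: a generic decreasing-chunk self-scheduler plus a small
# simulated-annealing loop tuning one scheduling parameter. The real QSS/ESS
# formulas and the paper's simulator are NOT reproduced; the cost model is a toy.
import math
import random

def chunks(n_tasks, n_procs, factor):
    """Yield decreasing chunk sizes until all tasks are assigned."""
    remaining = n_tasks
    while remaining > 0:
        c = max(1, int(remaining / (factor * n_procs)))
        c = min(c, remaining)
        yield c
        remaining -= c

def cost(factor, n_tasks=1000, n_procs=8, overhead=1.0):
    """Toy cost: ideal parallel work plus a fixed scheduling cost per chunk."""
    n_chunks = sum(1 for _ in chunks(n_tasks, n_procs, factor))
    return n_tasks / n_procs + overhead * n_chunks

def anneal(lo=1.0, hi=8.0, steps=2000, temp0=1.0, seed=0):
    """Minimize cost(factor) over [lo, hi] by simulated annealing."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best, best_c = x, cost(x)
    for k in range(steps):
        t = temp0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        y = min(hi, max(lo, x + rng.gauss(0, 0.5)))  # local perturbation
        dc = cost(y) - cost(x)
        if dc < 0 or rng.random() < math.exp(-dc / t):
            x = y                                    # accept move
            if cost(x) < best_c:
                best, best_c = x, cost(x)
    return best, best_c
```

The point mirrors the paper's finding: once the environment is captured in a (here, trivial) simulator, tuning the scheduling parameters costs almost nothing compared with measuring them experimentally.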
F. Andriamanampisoa, J. Jessel, S. Rakotondraompiana
We have previously presented a new approach to non-rigid registration using an elastic model, the finite element method and mutual information, with good results. The approach is multimodal and fully automatic. Moreover, it can be used to register low-resolution images, such as positron emission tomography (PET) and single photon emission computed tomography (SPECT) images. However, the registration requires a very long computation time and a large amount of memory, especially with 3D images, so parallelization is required. The aim here is therefore to parallelize this non-rigid registration approach. This paper presents the theory of the approach and deals with the transformation of the registration algorithm into a parallel environment. We have worked with the single program multiple data model on a distributed memory (SPMD-DM) architecture.
"A Non-rigid Registration Using Elastic Model, Finite Element Method and Mutual Information in Parallel Environment" by F. Andriamanampisoa, J. Jessel and S. Rakotondraompiana. doi:10.1109/ADVCOMP.2008.26. 2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences, 2008-09-29.
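Mutual information, the similarity measure named above, can be computed from a joint histogram of two discretized images. The minimal sketch below shows only that measure; the elastic model, FEM machinery and SPMD parallelization of the paper are not reproduced.

```python
# Minimal sketch: mutual information (in bits) between two discretized images
# via a joint histogram -- the similarity measure used in registration
# approaches like the one above. The elastic/FEM machinery is not reproduced.
import math
from collections import Counter

def mutual_information(a, b):
    """a, b: equal-length sequences of discrete intensity values."""
    assert len(a) == len(b)
    n = len(a)
    pa = Counter(a)              # marginal counts of image a
    pb = Counter(b)              # marginal counts of image b
    pab = Counter(zip(a, b))     # joint counts
    mi = 0.0
    for (x, y), nxy in pab.items():
        pxy = nxy / n
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts folded in
        mi += pxy * math.log2(pxy * n * n / (pa[x] * pb[y]))
    return mi

# Registering an image against itself maximizes MI: it equals the entropy
# of the image (here 4 equiprobable intensities -> 2 bits).
img = [0, 0, 1, 1, 2, 2, 3, 3]
print(round(mutual_information(img, img), 3))  # → 2.0
```

A registration algorithm searches for the deformation that maximizes this quantity between the transformed moving image and the fixed image, which is what makes the measure work across modalities such as PET and SPECT.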
In recent years, distributed virtual environments (DVEs) have become one of the major network applications, mainly due to the enormous popularity of multiplayer online games (MOGs) in the entertainment industry. Although the workload generated by avatars in a DVE system has already been characterized, the actual network traffic requirements of MOGs are usually limited (hidden) by the available network bandwidth. In this paper, we measure the network traffic requirements of the most popular MOGs by monitoring the traffic generated by different game tournaments at a LAN party. The network infrastructure was explicitly designed and implemented for that event by a network service provider, achieving a sustained bandwidth of 100 Mbps for each network interface; the potential bandwidth bottleneck was thus moved from the network to another element of the system or application. The results show that the aggregate bandwidth required by these applications is no higher than 1600 Kbps. The results also show identical variations in the network traffic sent to some of the clients by the game server. These results can be used as a basis for an efficient design of MOG infrastructure.
"Analyzing the Network Traffic Requirements of Multiplayer Online Games" by E. Asensio, J. M. Orduña and P. Morillo. doi:10.1109/ADVCOMP.2008.15. 2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences, 2008-09-29.
A. Calanducci, F.P. Castrillo, R. R. Pollán, M.R. del Solar
In this work we present gLibrary/DRI, a Grid-based multi-repository environment designed to ease the deployment and hosting of repositories on top of Grid e-Infrastructures. Grid environments offer several features that are valuable for digital repositories; the most important are strong security contexts, data federation, information sharing and the availability of large computing and storage capacity. The gLibrary/DRI platform offers arbitrary repositories default implementations of the storage system and node navigation, algorithm-launching mechanisms and easy integration with viewer tools for representing repository contents. In particular, we present two examples: (1) hosting a mammogram archive with several features that help clinicians make diagnoses through easy inspection of the repository contents, and (2) driving and hosting scientific production for nonlinear dynamics problems formulated as phase spaces.
"Enabling Digital Repositories on the Grid" by A. Calanducci, F. P. Castrillo, R. R. Pollán and M. R. del Solar. doi:10.1109/ADVCOMP.2008.41. 2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences, 2008-09-29.
Software organizations put considerable effort into improving the accuracy of project cost estimation, which in turn helps them allocate resources. Software cost estimation has long been an area of key interest in the software engineering community, and many estimation models in various categories have been proposed over time. Function Point (FP) analysis is a useful software cost estimation methodology that was first proposed twenty-five years ago, built on a project repository that contained information about various aspects of software projects. In the last twenty-five years software development productivity has grown rapidly, but the complexity weight values assigned to the standard FP count have remained the same. This fact raises critical questions about the validity of the complexity weights and the accuracy of the estimation process. The objective of this work is to present a genetic algorithm based approach to calibrating the complexity weight metrics of FP using the project repository of the International Software Benchmarking Standards Group (ISBSG) dataset. This work shows that reuse and integration of past projects' function-point structural elements improves the accuracy of the software estimation process.
"Integrating Function Point Project Information for Improving the Accuracy of Effort Estimation" by F. Ahmed, S. Bouktif, A. Serhani and I. Khalil. doi:10.1109/ADVCOMP.2008.42. 2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences, 2008-09-29.
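The calibration target above can be made concrete: an unadjusted function point count is a weighted sum over component types, and calibration means perturbing those weights to better fit observed effort. The sketch below uses the classic average-complexity IFPUG weights, but the random-search loop is only a stand-in for the paper's genetic algorithm, and the effort model (`effort ≈ 0.5 × FP`) and project data are made up.

```python
# Sketch: unadjusted function point count as a weighted sum, plus a toy
# random-search "calibration" of the weights against observed effort --
# a deliberate simplification of the paper's genetic algorithm.
import random

# Classic average-complexity weights for the five FP component types
# (external inputs/outputs/inquiries, internal/external logical files).
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def fp_count(counts, weights=WEIGHTS):
    """counts: mapping component type -> number of components of that type."""
    return sum(counts[t] * weights[t] for t in counts)

def calibrate(projects, iters=500, seed=1):
    """projects: list of (counts, observed_effort). Hill-climb the weights to
    minimize squared error of a made-up effort model, effort ~ 0.5 * FP."""
    rng = random.Random(seed)
    def err(w):
        return sum((e - 0.5 * fp_count(c, w)) ** 2 for c, e in projects)
    best, best_e = dict(WEIGHTS), err(WEIGHTS)
    for _ in range(iters):
        cand = {t: max(1, v + rng.choice([-1, 0, 1])) for t, v in best.items()}
        if err(cand) < best_e:
            best, best_e = cand, err(cand)
    return best

counts = {"EI": 10, "EO": 5, "EQ": 4, "ILF": 3, "EIF": 2}
print(fp_count(counts))  # → 125
```

A genetic algorithm replaces the single hill-climbing candidate with a population of weight vectors evolved by crossover and mutation, which is better at escaping local minima on a real dataset such as ISBSG.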
K. Fakhfakh, T. Chaari, S. Tazi, K. Drira, M. Jmaiel
Specifying clear quality of service (QoS) agreements between service providers and consumers is particularly important for the successful deployment of service-oriented architectures. The related challenges include correctly elaborating and monitoring QoS contracts (SLAs: service level agreements) to detect and handle their violations. In this paper, we first study and analyze existing SLA-related models. We then elaborate a complete, generic and semantically richer ontology-based model of SLAs, using the Semantic Web Rule Language (SWRL) to express SLA obligations. This language facilitates the SLA monitoring process and the triggering of actions in case of violations. We used this model to automatically generate semantic-enabled QoS obligation monitors, and we developed a prototype to validate the model and the monitoring approach. We believe this work is a step toward the full automation of the SLA management process.
"A Comprehensive Ontology-Based Approach for SLA Obligations Monitoring" by K. Fakhfakh, T. Chaari, S. Tazi, K. Drira and M. Jmaiel. doi:10.1109/ADVCOMP.2008.21. 2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences, 2008-09-29.
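The control flow of obligation monitoring can be shown with a far simpler stand-in than SWRL: obligations as threshold rules evaluated against measured metrics, with violations collected for action triggering. All names and values below are hypothetical; real SWRL rules over an ontology are much more expressive than these triples.

```python
# Minimal, hypothetical stand-in for rule-based SLA monitoring: obligations
# as (metric, comparator, threshold) triples checked against measurements.
# SWRL rules over an ontology are far richer; this shows only the control flow.
import operator

OPS = {"<=": operator.le, ">=": operator.ge, "<": operator.lt, ">": operator.gt}

def check_obligations(obligations, measurements):
    """Return the list of violated obligations with the offending value."""
    violations = []
    for metric, op, threshold in obligations:
        value = measurements.get(metric)
        if value is not None and not OPS[op](value, threshold):
            violations.append((metric, op, threshold, value))
    return violations

sla = [("response_time_ms", "<=", 200), ("availability_pct", ">=", 99.9)]
measured = {"response_time_ms": 350, "availability_pct": 99.95}
print(check_obligations(sla, measured))  # one violation: response_time_ms
```

In the monitoring loop, each returned violation would feed the action-triggering step (notification, penalty accounting, renegotiation), which is the part the paper generates automatically from the ontology.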
E. Acción, M. Delfino, X. Espinal, J. Flix, C. Neißner, K. Neuffer, F. Martínez, G. Merino, A. Sáinz
We explore the status, challenges and opportunities in providing a resource center focused on large scale data management on the EGEE grid, in the context of the emerging Spanish e-science network. The application to bulky and relatively flat datasets in high energy physics is mature and in daily production. Transferring this methodology to other application fields is still ongoing, posing challenges to the grid infrastructure, the resource centers and the grid middleware and even to the human aspects of collaborative science.
"Large Scale Data Management on Grids" by E. Acción, M. Delfino, X. Espinal, J. Flix, C. Neißner, K. Neuffer, F. Martínez, G. Merino and A. Sáinz. doi:10.1109/ADVCOMP.2008.33. 2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences, 2008-09-29.
We review the status of grid computing techniques for fusion simulation, describe the main applications running on the grid and the difficulties encountered in porting them, present the coming developments of grid computing for fusion, and discuss the future of this line of work.
"Grid Computing Devoted to Fusion Applications" by F. Castejón. doi:10.1109/ADVCOMP.2008.40. 2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences, 2008-09-29.
This work presents an optimization approach for the broadcast operation in MANETs based on the DFCN protocol. The approach involves a multi-objective optimization that is tackled through the cooperation of a team of evolutionary algorithms. The proposed optimization model is a hybrid algorithm that combines a parallel island-based scheme with a hyperheuristic approach. The model includes an adaptive property that dynamically changes the algorithms being executed on each island, so that more computational resources are granted to the most suitable algorithms. The computational results obtained for a highway MANET instance demonstrate the validity of the proposed model.
"Optimizing the Configuration of a Broadcast Protocol through Parallel Cooperation of Multi-objective Evolutionary Algorithms" by C. León, G. Miranda and C. Segura. doi:10.1109/ADVCOMP.2008.16. 2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences, 2008-09-29.
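Two building blocks behind the approach above can be sketched generically: Pareto dominance, the comparison underlying any multi-objective method, and an adaptive split of the evaluation budget among islands proportional to their recent improvement, which is the hyperheuristic idea. The DFCN-specific objectives and the actual selection rule are not modeled; `allocate` and its `floor` parameter are illustrative assumptions.

```python
# Sketch of two building blocks of the approach above: Pareto dominance
# (multi-objective comparison) and an adaptive allocation of evaluations to
# islands in proportion to recent improvement (the hyperheuristic idea).
# The DFCN-specific objectives are not modeled here.
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def allocate(budget, improvements, floor=1):
    """Split `budget` evaluations among islands proportionally to their recent
    improvement scores, guaranteeing each island a minimum `floor` share so a
    currently weak algorithm still gets a chance to recover."""
    n = len(improvements)
    total = sum(improvements) or 1.0
    shares = [floor + int((budget - floor * n) * s / total) for s in improvements]
    shares[0] += budget - sum(shares)  # hand the rounding remainder to island 0
    return shares

print(dominates((1, 2), (2, 2)))       # → True
print(allocate(100, [3.0, 1.0, 0.0]))  # sums to 100; the best island gets most
```

Granting every island a floor share is one common way to keep the adaptive scheme from starving an algorithm that happens to perform poorly early on.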