We previously developed two tools, ISG and DWL: ISG generates information systems and DWL generates web systems. We have used them to develop a customized web-based system for the 7th Ubiquitous-Home Conference (UHC2013), among other web-based systems. The advantage of these web-based systems is that they use the object serialization mechanism to fill objects with data, which saves CPU time. We used ISG's building files to build the file system of a web-based system, with each attribute of an object specified so that these settings can be translated into Java. We wrote input/output programs to read and write data, so the objects populated in this way can be stored and retrieved efficiently. Our web-based systems avoid running cooperating processes that share data, and thus avoid inconsistencies in the shared data. Companies frequently release new products whose web pages must be updated quickly; using ISG solves this problem.
{"title":"Amazing of Using ISG on Implementing a Web-Based System","authors":"Ling-Hua Chang, Sanjiv Behl, Tung-Ho Shieh","doi":"10.1109/PDCAT.2013.14","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.14","url":null,"abstract":"We developed two tools previously called ISG and DWL and ISG is for generating information systems and DWL is for generating web systems. We have used ISG and DWL to develop a customized web-based system for the 7th Ubiquitous-Home Conference UHC2013 and other web-based systems. The advantage of these web-based systems is that it uses object serialization mechanism to fill objects with data which saves CPU time. We used building files of ISG to build the file system of a web-based system and each attribute of an object to be specified for translating these settings to Java. We wrote Input Output programs to read data and write data and these objects with data entry thus created can be stored and retrieved efficiently. Our web-based systems avoid running cooperating processes that share data and resulting in inconsistencies in the shared data. Company produce new products frequently and the web pages of new products need to be updated shortly and using ISG can solve this problem.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129750821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel ad-hoc approach named Distance Score with Weights Selection (DSWS) is proposed to select suitable slots for loading feeders. A genetic algorithm is then proposed to search for optimal solutions for arranging the feeders in the selected slots and sequencing the placement positions. Numerical results show the effectiveness and efficiency of the proposed approach and its advantages over existing benchmark algorithms.
{"title":"An Ad-Hoc Method with Genetic Algorithm for Printed Circuit Board Assembly Optimization on the Sequential Pick-and-Place Machine","authors":"Gang Peng, Kehan Zeng","doi":"10.1109/PDCAT.2013.27","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.27","url":null,"abstract":"A novel ad-hoc approach named Distance Score with Weights Selection (DSWS) is proposed to select suitable slots for loading feeders. A genetic algorithm is proposed to search the optimal solutions of arranging feeders to selected slots and sequencing the placement positions. The numerical results show the effectiveness and efficiency of the proposed approach and its advantages over existing benchmark algorithms.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124692658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Most recommender systems proposed for service computing do not address attacks on service rating systems. This paper proposes a service rating system that is capable of countering malicious manipulation. The system predicts how customers rate services based on the ratings given by users similar to those customers and by trustworthy, experienced users. The proposed scheme uses collaborative filtering and a game theory-based approach to choose the users for rating prediction. Compared with existing schemes, the proposed scheme is more effective in countering malicious manipulation.
{"title":"A Game Theory-Based Approach to Service Rating","authors":"Xinfeng Ye, J. Zheng, B. Khoussainov","doi":"10.1109/PDCAT.2013.33","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.33","url":null,"abstract":"Most recommender systems proposed for service computing do not address the attacks on service rating systems. This paper proposed a service rating system that is capable of countering malicious manipulations. The system predicts how customers rate services based on the ratings given by the similar users of the customers and the trustworthy experienced users. The proposed scheme uses the collaborative filtering technique and a game theory-based approach in choosing users for rating prediction. Compared with existing schemes, the proposed scheme is more effective in countering malicious manipulations.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116463444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deaf people face serious difficulties in accessing information. They communicate naturally through sign languages, whereas, for most of them, spoken languages are only a second language. Information and Communication Technologies (ICTs) are rarely designed with the barriers that deaf people face in mind, and application developers commonly do not hire sign language interpreters to provide an accessible version of their app or site for deaf people. Tools for automatic translation between sign languages and spoken languages exist, but, unfortunately, they are not available to third parties. A publicly available automatic translation tool or service would reduce these problems, and that is the main goal of this work: to take an existing machine translation system from Portuguese to Brazilian Sign Language (LIBRAS), named VLIBRAS, and provide Deaf Accessibility as a Service (DAaaS) publicly. The idea is to abstract away the inherent problems of the Portuguese-to-LIBRAS translation process by providing a service that automatically translates multimedia content into LIBRAS. VLIBRAS was originally deployed as a centralized system, an architecture with several disadvantages compared to distributed ones. In this paper we propose two distributed architectures that provide a scalable, fault-tolerant service. The service is built on the cloud computing paradigm, which brings the additional benefits of transparency, high availability, and efficient use of resources.
{"title":"A Scalable and Fault Tolerant Architecture to Provide Deaf Accessibility as a Service","authors":"E. Falcão, T. Araújo, Alexandre Duarte","doi":"10.1109/PDCAT.2013.62","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.62","url":null,"abstract":"Deaf people face serious difficulties to access information. The fact is that they communicate naturally through sign languages, whereas, to most of them, the spoken languages are considered only a second language. When designed, Information and Communication Technologies (ICTs) rarely take into account the barriers that deaf people face. It is common that application developers do not hire sign languages interpreters to provide an accessible version of their app/site to deaf people. Currently, there are tools for automatic translation from sign languages to spoken languages, but, unfortunately, they are not available to third parties. To reduce these problems, it would be interesting if any automatic translation tool/service could be publicly available. This is the main goal of this work: use a preconceived machine translation from Portuguese Language to Brazilian Sign Language (LIBRAS) (named VLIBRAS) and provide Deaf Accessibility as a Service (DAaaS) publicly. The idea is to abstract inherent problems in the translation process between the Portuguese Language and LIBRAS by providing a service that performs the automatic translation of multimedia content to LIBRAS. VLIBRAS was primarily deployed as a centralized system, and this conventional architecture has some disadvantages when compared to distributed architectures. In this paper we propose two distributed architectures in order to provide a scalable service and achieve fault tolerance. For conception and availability of this service, it will be used the cloud computing paradigm to incorporate the following additional benefits: transparency, high availability, and efficient use of resources.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128479260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays large corporations require integrated data from diverse sources, leading to the use of data warehouse architectures for this purpose. To avoid the heavy use of computational resources needed to process large volumes of data at once, a zero-latency ETL (Extract, Transform and Load) technique can be used, which works by constantly processing small data loads. Extraction techniques for zero-latency ETL include logs, triggers, materialized views and timestamps. This paper proposes a structure that performs this task by means of triggers, together with a tool that automatically generates the SQL (Structured Query Language) code to create these triggers; we also evaluate its performance and compare it with other techniques. The method is relevant for extracting selected portions of information, as it permits combining conventional and real-time ETL techniques.
{"title":"Real Time Delta Extraction Based on Triggers to Support Data Warehousing","authors":"C. R. Valêncio, Matheus Henrique Marioto, G. F. D. Zafalon, J. M. Machado, J. C. Momente","doi":"10.1109/PDCAT.2013.52","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.52","url":null,"abstract":"Nowadays large corporations require integrated data from diverse sources, leading to the use of data warehouse architectures for this purpose. To bypass problems related to the use of computational resources to process large volumes of data, an ETL (Extract, Transform and Load) technique with zero latency can be used, that works by constantly processing small data loads. Among the extraction techniques of the zero latency ETL are the use of logs, triggers, materialized views and timestamps. This paper proposes a structure capable of performing this task by means of triggers and a tool developed for the automatic generation of the SQL (Structured Query Language) code to create these trigger, besides showing its performance and comparing it to other techniques. Said method is relevant for the extraction of portions of selected information as it permits to combine conventional and real time ETL techniques.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125132618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper concerns the impact of various antenna models on the network connectivity of wireless ad hoc networks. Existing antenna models have pros and cons in how accurately they reflect realistic antennas and in their computational complexity. We therefore propose a new directional antenna model, called Approx-real, that balances accuracy against complexity. We then run extensive simulations to compare the existing models and the Approx-real model in terms of network connectivity. The results show that the Approx-real model approximates the most accurate existing antenna models better than other simplified models do, without introducing high computational overhead.
{"title":"Connectivity of Wireless Ad Hoc Networks: Impacts of Antenna Models","authors":"Qiu Wang, Hongning Dai, Qinglin Zhao","doi":"10.1109/PDCAT.2013.53","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.53","url":null,"abstract":"This paper concerns the impact of various antenna models on the network connectivity of wireless ad hoc networks. Existing antenna models have their pros and cons in the accuracy reflecting realistic antennas and the computational complexity. We therefore propose a new directional antenna model called Approx-real to balance the accuracy against the complexity. We then run extensive simulations to compare the existing models and the Approx-real model in terms of the network connectivity. The study results show that the Approx-real model can better approximate the best accurate existing antenna models than other simplified antenna models, while introducing no high computational overheads.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133080635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider a type of game-theoretic dynamics in a network model where all nodes act selfishly and will forward packets only if it is to their benefit. The model we present assumes that each node receives utility from successfully sending its own flow to its destination(s) and from receiving flow, while it pays a cost (e.g., battery energy) for its transmissions. Each node has to decide whether to relay flow as an intermediate node from other sources, as relaying incurs only costs. To induce nodes into acting as intermediaries, the model implements a reputation-based mechanism which punishes non-cooperative nodes by cutting off links to them, a decision that is made in a very local fashion. In our setting, the nodes know only the state of the network in their local neighborhood, and can only decide on the amount of the flow on their outgoing edges, unlike previously considered models where users have full knowledge of the network and can also decide the routing of flow originating from them. Given the opportunistic nature of the nodes and their very limited knowledge of the network, our simulations show the rather surprising fact that a non-negligible amount of non-trivial flow (flow over at least two hops) is successfully transmitted.
{"title":"Dynamics of a Localized Reputation-Based Network Protocol","authors":"George Karakostas, Raminder Kharaud, Anastasios Viglas","doi":"10.1109/PDCAT.2013.30","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.30","url":null,"abstract":"We consider a type of game theoretic dynamics in a network model where all nodes act selfishly and will forward packets only if it is to their benefit. The model we present assumes that each node receives utility from successfully sending its own flow to its destination(s) and from receiving flow, while it pays a cost (e.g., battery energy) for its transmissions. Each node has to decide whether to relay flow as an intermediate node from other sources, as relaying incurs only costs. To induce nodes into acting as intermediaries, the model implements a reputation-based mechanism which punishes non-cooperative nodes by cutting off links to them, a decision that is made in a very local fashion. In our setting, the nodes know only the state of the network in their local neighborhood, and can only decide on the amount of the flow on their outgoing edges, unlike the previously considered models where users have full knowledge of the network and can also decide the routing of flow originating from them. Given the opportunistic nature of the nodes and their very limited knowledge of the network, our simulations show the rather surprising fact that a non-negligible amount of non-trivial flow (flow over at least two hops) is successfully transmitted.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133184768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The main objective of image enhancement is to improve the visual quality of digital images captured under extremely low or non-uniform lighting conditions. We present an adaptive image enhancement algorithm based on the Zone System that reveals hidden image details and increases the contrast of images with low dynamic range. It comprises two processes: adaptive luminance enhancement and adaptive contrast enhancement. The adaptive luminance enhancement is a global intensity transform function based on Zone System information; it not only increases the luminance of darker pixels but also compresses the dynamic range of the image. The adaptive contrast enhancement adjusts the intensity of each pixel based on discontinuities in the local luminance, improving the contrast of local regions and clearly revealing image details. In our experiments, the proposed algorithm performed well at enhancing contrast, preserving detail, and sharpening object edges, and it outperformed other algorithms in both subjective and objective evaluations.
{"title":"An Adaptive Image Enhancement Algorithm Based on Zone System","authors":"S. Tai, Chia-Ying Chang, Han-Ru Fan","doi":"10.1109/PDCAT.2013.57","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.57","url":null,"abstract":"The main objective of image enhancement is to improve the visual quality of digital images that are captured under extremely low or non-uniform lighting conditions. We present an adaptive image enhancement algorithm based on Zone System. This study reveals hidden image details and increases the contrast of an image with low dynamic range. It is comprised two processes: adaptive luminance enhancement and adaptive contrast enhancement. The adaptive luminance enhancement algorithm is a global intensity transform function based on Zone System information. This process not only increases the luminance of darker pixels but also compresses the dynamic range of the image. The adaptive contrast enhancement adjusts the intensity of each pixel based on the discontinuities of the local luminance. It also improves the contrast of local region and reveals the details of image clearly. The proposed algorithm has good performance on enhancing contrast, preserving more detail of characteristics and sharpening edges of objects in experimental results. The performance with our proposed was better evaluation and comparison than other algorithms in the subjective and objective evaluation.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116238058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scheduling applications efficiently on a network of computing systems is crucial for high performance. The problem is known to be NP-hard and is further complicated in a CPU-GPU heterogeneous environment. Heuristic algorithms such as Heterogeneous Earliest Finish Time (HEFT) have been shown to produce good results in other heterogeneous environments such as grids and clusters. In this paper, we propose a novel optimization of this algorithm that takes advantage of the dissimilar execution times of the processors in the chosen environment. We optimize both the task-ranking and the processor-selection steps of HEFT. By balancing the locally optimal result against the globally optimal one, we show that performance can be improved significantly without any change in the algorithm's complexity compared to HEFT. Using randomly generated Directed Acyclic Graphs (DAGs), the new algorithm, HEFT-NC (No-Cross), is compared with HEFT in terms of both speedup and schedule length. We show that HEFT-NC outperforms HEFT consistently across different graph shapes and task sizes.
{"title":"Optimization of the HEFT Algorithm for a CPU-GPU Environment","authors":"K. Shetti, Suhaib A. Fahmy, T. Bretschneider","doi":"10.1109/PDCAT.2013.40","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.40","url":null,"abstract":"Scheduling applications efficiently on a network of computing systems is crucial for high performance. This problem is known to be NP-Hard and is further complicated when applied to a CPU-GPU heterogeneous environment. Heuristic algorithms like Heterogeneous Earliest Finish Time (HEFT) have shown to produce good results for other heterogeneous environments like Grids and Clusters. In this paper, we propose a novel optimization of this algorithm that takes advantage of dissimilar execution times of the processors in the chosen environment. We optimize both the task ranking as well as the processor selection steps of the HEFT algorithm. By balancing the locally optimal result with the globally optimal result, we show that performance can be improved significantly without any change in the complexity of the algorithm (as compared to HEFT). Using randomly generated Directed A cyclic Graphs (DAGs), the new algorithm HEFT-NC (No-Cross) is compared with HEFT both in terms of speedup and schedule length. We show that the HEFT-NC outperforms HEFT algorithm and is consistent across different graph shapes and task sizes.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126839817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatial data mining techniques enable knowledge extraction from spatial databases. However, the high computational cost and complexity of the algorithms are among the main problems in this area. This work proposes a new algorithm, VDBSCAN+, which is derived from VDBSCAN (Varied Density Based Spatial Clustering of Applications with Noise) and exploits GPU (Graphics Processing Unit) parallelism, obtaining a significant performance improvement: runtime is reduced by 95% compared with VDBSCAN.
{"title":"VDBSCAN+: Performance Optimization Based on GPU Parallelism","authors":"C. R. Valêncio, Guilherme Prióli Daniel, C. D. Medeiros, A. Cansian, L. Baida, Fernando Ferrari","doi":"10.1109/PDCAT.2013.11","DOIUrl":"https://doi.org/10.1109/PDCAT.2013.11","url":null,"abstract":"Spatial data mining techniques enable the knowledge extraction from spatial databases. However, the high computational cost and the complexity of algorithms are some of the main problems in this area. This work proposes a new algorithm referred to as VDBSCAN+, which derived from the algorithm VDBSCAN (Varied Density Based Spatial Clustering of Applications with Noise) and focuses on the use of parallelism techniques in GPU (Graphics Processing Unit), obtaining a significant performance improvement, by increasing the runtime by 95% in comparison with VDBSCAN.","PeriodicalId":187974,"journal":{"name":"2013 International Conference on Parallel and Distributed Computing, Applications and Technologies","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131597237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}