"On the efficiency of query-subquery nets: an experimental point of view", S. Cao. DOI: 10.1145/2542050.2542085

The aim of this paper is to analyze the efficiency of the QSQN method, which we proposed together with Nguyen in [10] for evaluating queries to Horn knowledge bases. To compare QSQN with the well-known QSQR method and the method based on the Magic-Set transformation, we have implemented all three methods and compared them on representative examples that appear in many articles on deductive databases. Our experimental results show that the QSQN method usually outperforms the other two. Beyond the experimental results, we also explain the reasons behind the good performance of QSQN.
"Towards tangent-linear GPU programs using OpenACC", B. T. Minh, Michael Förster, U. Naumann. DOI: 10.1145/2542050.2542059

Recently, Graphics Processing Units (GPUs) have emerged as a promising and powerful resource in scientific computing. Algorithmic Differentiation is a technique for numerically evaluating first and higher derivatives of a function specified by a computer program, efficiently and up to machine precision. The derivative programs used to compute derivatives of a function are the so-called tangent-linear and adjoint programs. This paper aims to offload independent loops in tangent-linear programs to GPUs. The proposed technique uses the OpenACC API to annotate independent loops for parallel execution on GPUs. Our case study of OpenACC tangent-linear code shows an enormous speedup, and OpenACC demonstrates the simplicity of accelerating tangent-linear code by hiding the data movement between CPU and GPU memory.
"Demystifying sparse rectified auto-encoders", Kien Tran, H. Le. DOI: 10.1145/2542050.2542065

Auto-Encoders can learn features similar to Sparse Coding, but they can be trained efficiently via the back-propagation algorithm, and their features can be computed quickly for a new input. In practice, however, it is not easy to get Sparse Auto-Encoders working; two issues need investigation: the sparsity constraint and the weight constraint. In this paper, we examine the problem of training Sparse Auto-Encoders with an L1-norm sparsity penalty and propose a modified version of the Stochastic Gradient Descent algorithm, called Sleep-Wake Stochastic Gradient Descent (SW-SGD), to solve this problem. We focus on Sparse Auto-Encoders with rectified linear units in the hidden layer, called Sparse Rectified Auto-Encoders (SRAEs), because such units compute quickly and can produce true sparsity (exact zeros). In addition, we propose a new, well-motivated way to constrain SRAEs' weights. Experiments on the MNIST dataset show that the proposed weight constraint and SW-SGD help SRAEs learn meaningful features that give excellent classification performance compared to other Auto-Encoder variants.
"Constructing test cases for n-wise testing from tree-based test models", Thi Bich Ngoc Do, Takashi Kitamura, Nguyen Van Tang, G. Hatayama, Shin Sakuragi, H. Ohsaki. DOI: 10.1145/2542050.2542074

In our previous work [17], we proposed a model-based combinatorial testing method called FOT, which provides a technique for designing test models for combinatorial testing based on extended logic trees. In this paper, we introduce pair-wise testing (and, more generally, n-wise testing for n = 1, 2, ...) to FOT by developing a technique to construct n-wise test suites from FOT test models. We take a "transformation approach": the extended logic trees of FOT are first transformed into the input formats accepted by existing n-wise testing tools (such as PICT [9], ACTS [30], and CIT-BACH [31]), and the transformed test models are then fed to any of these tools. A key role in this approach is played by the "flattening algorithm". We prove the correctness of this algorithm and implement it to automate test-suite construction in a tool called FOT-nw (FOT with n-wise). Further, to show the effectiveness of the technique, we conduct a case study in which FOT-nw is applied to design test models and automatically construct n-wise test suites for an embedded system of stationary services in real industrial use.
"A better bit-allocation algorithm for H.264/SVC", Vo Phuong Binh, Shih-Hsuan Yang. DOI: 10.1145/2542050.2542067

Bit allocation is an essential issue for the rate-control performance of an H.264 scalable video encoder. In this paper, we propose an efficient frame-level bit allocation algorithm for H.264 temporal scalable video coding. The bit budget for the temporal layers is based on the target bit rate, the hierarchical level, the buffer constraints, and the predicted mean absolute difference (MAD) of the current frame. The algorithm is also extended to the spatial enhancement layers by considering inter-layer MAD prediction and the relationship between the inter-layer target bit rates. Experimental results show that, compared to state-of-the-art approaches in the literature, the proposed algorithm effectively prevents buffer overflow and underflow, achieves accurate bit rates (with a deviation of bit rate, DBR, of less than 2%), and delivers better visual quality.