Universal intelligent data compression systems: A review
Pub Date: 2010-10-21 | DOI: 10.1109/CEEC.2010.5606482
A. Kattan
Researchers have classically addressed the problem of universal compression using two approaches. The first has been to develop adaptive compression algorithms, where the system changes its behaviour during compression to fit the encoding situation of the given data. The second has been to compose multiple compression algorithms. Recently, however, researchers have adopted a third approach to building compression systems: the application of computational intelligence paradigms. This approach has shown remarkable results in the data compression domain, improving the decision-making process and outperforming conventional data compression systems. This paper reviews previous attempts to address the universal compression problem with both conventional and computational intelligence techniques.
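To make the first approach concrete, the sketch below (a generic illustration, not an algorithm from the paper) shows the core of an adaptive model: symbol probabilities are re-estimated as data is seen, so the coder's behaviour changes to fit the input.

```python
# Generic adaptive (order-0) model: the probability estimates that would
# drive an entropy coder are updated after every symbol, so the system
# adapts its behaviour to the data being compressed.
from collections import Counter

class AdaptiveModel:
    def __init__(self, alphabet):
        # Initialise every symbol with a count of 1 (Laplace smoothing)
        # so no symbol is ever assigned zero probability.
        self.counts = Counter({s: 1 for s in alphabet})
        self.total = len(alphabet)

    def probability(self, symbol):
        # Current estimate an arithmetic coder would use for this symbol.
        return self.counts[symbol] / self.total

    def update(self, symbol):
        # Adapt: shift probability mass toward recently seen symbols.
        self.counts[symbol] += 1
        self.total += 1

model = AdaptiveModel("ab")
for s in "aaab":
    p = model.probability(s)  # code the symbol at its current probability
    model.update(s)
print(round(model.probability("a"), 2))  # 0.67: the model has adapted to 'a'
```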
{"title":"Universal intelligent data compression systems: A review","authors":"A. Kattan","doi":"10.1109/CEEC.2010.5606482","DOIUrl":"https://doi.org/10.1109/CEEC.2010.5606482","url":null,"abstract":"Researchers have classically addressed the problem of universal compression using two approaches. The first approach has been to develop adaptive compression algorithms, where the system changes its behaviour during the compression to fit the encoding situation of the given data. The second approach has been to use the composition of multiple compression algorithms. Recently, however, a third approach has been adopted by researchers in order to develop compression systems: the application of computational intelligence paradigms. This has shown remarkable results in the data compression domain improving the decision making process and outperforming conventional systems of data compression. This paper reviews some of the previous attempts to address the universal compression problem within conventional and computational intelligence techniques.","PeriodicalId":175099,"journal":{"name":"2010 2nd Computer Science and Electronic Engineering Conference (CEEC)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133552906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating UPnP service discovery protocols by using NS2 simulator
Pub Date: 2010-10-21 | DOI: 10.1109/CEEC.2010.5606486
I. Al-Mejibli, M. Colley
Examining a protocol is necessary in order to identify its advantages and disadvantages precisely. Here we apply UPnP technology to an Ethernet network to evaluate its performance and identify its positive and negative sides; UPnP can run over many media that support IP, such as Ethernet, Bluetooth, and Wi-Fi. This paper evaluates the UPnP service discovery protocol on networks of different sizes and shapes using the NS2 simulator. The evaluation assesses the transmission time required to send and receive the messages that discover all services in a defined network, and the cost, for each examined network size, of completing this process, where cost means the number of messages that must be transmitted to discover all services in the defined network. The effect of network shape on the performance of the UPnP protocol is also examined.
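For readers unfamiliar with the protocol being simulated, the snippet below sends a standard SSDP M-SEARCH message, the multicast discovery request UPnP control points issue; each unicast reply corresponds to one discovered service, which is what the message-count cost in this evaluation tallies. This is a real-network illustration, separate from the paper's NS2 models.

```python
# Standard SSDP M-SEARCH discovery probe (UPnP's service discovery step).
import socket

MCAST = ("239.255.255.250", 1900)   # SSDP multicast group and port
msearch = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",                        # devices spread replies over MX seconds
    "ST: ssdp:all",                 # search target: all services
    "", "",
]).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(msearch, MCAST)
try:
    while True:                     # each reply = one discovered service
        data, addr = sock.recvfrom(4096)
        print(addr[0], data.split(b"\r\n")[0].decode())
except socket.timeout:
    pass                            # no more replies within the timeout
```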
{"title":"Evaluating UPnP service discovery protocols by using NS2 simulator","authors":"I. Al-Mejibli, M. Colley","doi":"10.1109/CEEC.2010.5606486","DOIUrl":"https://doi.org/10.1109/CEEC.2010.5606486","url":null,"abstract":"Examining a protocol is necessary in order to identify its advantages and disadvantages precisely. Here we apply UPnP technology on an Ethernet network to evaluate its performance and identify its positive and negative sides. As it is known that UPnP can be run on many media that support IP such as Ethernet, Bluetooth and Wi-Fi. This paper produces an evaluation of UPnP service discovery protocol on different network sizes and shape by using NS2 simulator. This evaluation includes an assessment to the required transmission time which is required in the sending and receiving of messages for discovering all services in a defined network, and the cost for each examined network size to achieve this process. Cost means the required number of messages that should be transmitted to discover all services in the defined network. In addition to, examine the effect of the network shape to the performance of the UPnP protocol.","PeriodicalId":175099,"journal":{"name":"2010 2nd Computer Science and Electronic Engineering Conference (CEEC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127821548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reducing search space traversal in 3D scene reconstruction
Pub Date: 2010-10-21 | DOI: 10.1109/CEEC.2010.5606497
Thomas Warsop, Sameer Singh
Three-dimensional scene recovery from two-dimensional image data is a challenging task. Typical methods compute two-dimensional feature correspondences between image frames. However, these methods introduce errors because the three-dimensional warping of image elements is not considered. Further, no previously presented method exploits the strong correlation between three-dimensional scene elements in adjacent video frames. In this work, we describe a simple three-dimensional scene recovery method and extend it with a novel mechanism that exploits temporal information, increasing the method's efficiency. The proposed method is applied to the recovery of outdoor scenes. Compared with other, more traditional, three-dimensional recovery methods, the proposed method provides more accurate results, and the addition of temporal information is shown to speed up execution without reducing accuracy.
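The abstract does not detail the temporal mechanism, but the general idea of reducing search-space traversal with a prior frame can be sketched as follows: restrict each pixel's depth search to a window around the estimate recovered for the previous frame. The cost-volume layout and window size here are assumptions for illustration, not the paper's method.

```python
import numpy as np

def refine_depth(cost, prev_depth, window=3):
    """Pick the minimum-cost depth per pixel, searching only within
    +/-window of the previous frame's estimate instead of all d levels.
    cost: (h, w, d) matching-cost volume; prev_depth: (h, w) int indices."""
    h, w, d = cost.shape
    depth = np.empty((h, w), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            lo = max(0, prev_depth[y, x] - window)
            hi = min(d, prev_depth[y, x] + window + 1)
            depth[y, x] = lo + np.argmin(cost[y, x, lo:hi])
    return depth  # 2*window+1 cost evaluations per pixel instead of d
```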
{"title":"Reducing search space traversal in 3D scene reconstruction","authors":"Thomas Warsop, Sameer Singh","doi":"10.1109/CEEC.2010.5606497","DOIUrl":"https://doi.org/10.1109/CEEC.2010.5606497","url":null,"abstract":"Three-dimensional scene recovery from two-dimensional image data is a challenging task. Typical methods compute two-dimensional image feature correspondences between image frames. However, these methods introduce errors as three-dimensional warping of image elements is not considered. Further, no previous method presented exploits the strong correlation between three-dimensional scene elements in adjacent video frames. In this work, we describe a simple three-dimensional scene recovery method which is extended using a novel extension exploiting temporal information, increasing efficiency of the method. The proposed method is applied to the recovery of outdoor scenes. Comparing with other, more traditional, three-dimensional recovery methods, the proposed method provides more accurate results and the addition of temporal information is shown to speed up execution without reducing accuracy.","PeriodicalId":175099,"journal":{"name":"2010 2nd Computer Science and Electronic Engineering Conference (CEEC)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127185676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Verification of Java programs in Coq
Pub Date: 2010-10-21 | DOI: 10.1109/CEEC.2010.5606499
Seokhyun Han
This paper investigates a functional interpretation of object-oriented programs in intensional type theory with dependent record types and coercive subtyping. We simulate a type-theoretic model of Java programs in Coq. By representing a class and its interface type, which declares a set of methods and their signatures for code reuse, as dependent record types, the type-theoretic encoding enjoys desirable subtyping relationships that correctly capture important object-oriented features such as inheritance, subtype polymorphism, and dynamic dispatch. Furthermore, since the model is given in intensional type theory, machine-supported verification of Java programs can be carried out in Coq by proving specifications that Java programs satisfy with regard to the state of objects.
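As a flavour of the encoding (a minimal Lean sketch of the general idea; the paper's actual development is in Coq and uses dependent record types), an interface becomes a record type, a class extends it, and the generated projection plays the role of the coercion that makes subtype polymorphism work:

```lean
-- Interface as a record type: any "Shape" must provide an area method.
structure Shape where
  area : Float

-- A "subclass" record extends the interface with its own fields.
structure Circle extends Shape where
  radius : Float

-- Code written against the interface type.
def describe (s : Shape) : Float := s.area

-- The generated projection Circle.toShape acts as the coercion, so a
-- Circle can be used where a Shape is expected (coercive subtyping).
def c : Circle := { area := 3.14, radius := 1.0 }
#eval describe c.toShape  -- evaluates to 3.14
```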
{"title":"Verification of Java programs in Coq","authors":"Seokhyun Han","doi":"10.1109/CEEC.2010.5606499","DOIUrl":"https://doi.org/10.1109/CEEC.2010.5606499","url":null,"abstract":"This paper is a research on functional interpretation of object-oriented programs in the intensional type theory with dependent record types and coercive subtyping. We are here simulating a type-theoretic model of Java programs in Coq. Representing a class and its interface-type, which declares a set of methods and their signatures for code reuse, as dependent record types, the type-theoretic encoding enjoys desirable subtyping relationships that correctly capture the important object-oriented features such as inheritance, subtype polymorphism and dynamic dispatch. Furthermore, since the model is given in the intensional type theory, machine-supported verification of Java programs can be done by proving specifications that is satisfied by Java programs in Coq with regard to the state of objects.","PeriodicalId":175099,"journal":{"name":"2010 2nd Computer Science and Electronic Engineering Conference (CEEC)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122526124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Curvelet transform based super-resolution using sub-pixel image registration
Pub Date: 2010-10-21 | DOI: 10.1109/CEEC.2010.5606490
A. A. Patil, R. Singhai, J. Singhai
Super-Resolution (SR) is an approach to restoring a High-Resolution (HR) image from one or more Low-Resolution (LR) images. The quality of an SR image reconstructed from a set of LR images depends on the registration accuracy of the LR images; the HR image can be reconstructed accurately by estimating the sub-pixel displacement of the image grid of each shifted LR image. In this paper, an approach to SR image reconstruction is proposed that uses sub-pixel-shift image registration and the Curvelet Transform (CT) for interpolation. The curvelet transform is a multiscale pyramid that provides optimally sparse representations of objects. Image interpolation is performed at the finest level in the curvelet domain. The experimental results demonstrate that the Curvelet Transform performs better than the Stationary Wavelet Transform, and it is experimentally verified that using CT for interpolation also reduces the computational complexity of the SR algorithm.
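For background (the model below is the standard SR formulation, not spelled out in the abstract), each LR frame is a warped, blurred, downsampled view of the HR image, which is why sub-pixel estimation of the warp governs reconstruction quality:

```latex
% Standard super-resolution observation model (background, not from the paper):
% y_k : k-th low-resolution frame      x   : high-resolution image
% M_k : sub-pixel warp found by registration
% B   : blur (point-spread function)   D   : downsampling    n_k : noise
\[
  y_k = D\,B\,M_k\,x + n_k , \qquad k = 1,\dots,K
\]
```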
{"title":"Curvelet transform based super-resolution using sub-pixel image registration","authors":"A. A. Patil, R. Singhai, J. Singhai","doi":"10.1109/CEEC.2010.5606490","DOIUrl":"https://doi.org/10.1109/CEEC.2010.5606490","url":null,"abstract":"Super-Resolution (SR) is an approach used to restore High-Resolution (HR) image from one or more Low-Resolution (LR) images. The quality of reconstructed SR image obtained from a set of LR images depends upon the registration accuracy of LR images. However, the HR images can be reconstructed accurately by estimating sub-pixel displacement of image grid of the shifted LR image. In this paper an approach of reconstruction of SR image using a sub-pixel shift image registration and Curvelet Transform (CT) for interpolation is proposed. The curvelet transform is multiscale pyramid which provides optimally sparse representation of objects. Image interpolation is performed at the finest level in Curvelet domain. The experimental results demonstrate that Curvelet Transform performs better as compared to Stationary Wavelet Transform. Also, it is experimentally verified that the computational complexity of the SR algorithm is also reduced by using CT for interpolation.","PeriodicalId":175099,"journal":{"name":"2010 2nd Computer Science and Electronic Engineering Conference (CEEC)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131209260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impact of intra-refresh provision on a data-partitioned wireless broadband video streaming scheme
Pub Date: 2010-10-21 | DOI: 10.1109/CEEC.2010.5606483
L. Al-Jobouri, M. Fleury, M. Ghanbari
Intra-refresh macroblocks are normally provided in mobile broadband wireless access to avoid the effect of temporal error propagation. Two questions then arise: in what form should the refresh take place, and what percentage of refresh is necessary? This paper is a study of intra-refresh provision in the context of a robust video streaming scheme that combines data-partitioned video compression with adaptive channel coding and redundant packets. The main conclusions from a detailed analysis are that, because of the effect on packet size, it is important to select a moderate quantization parameter, and, because of the higher overhead of cyclic intra-macroblock line update, it is better to select a low percentage of intra-refresh macroblocks per frame. In harsh channel conditions, where the combined effect of slow and fast fading produces 'bursty' errors, all the proposed measures are necessary; but periodic intra-refresh, with its sudden increases in data rate, can then be avoided if the proposed levels of intra-refresh macroblocks are applied.
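As an illustration of the cyclic intra-macroblock line update compared above (a generic sketch; the paper's encoder settings are not assumed), one macroblock row per frame is intra-coded, cycling so that every row is refreshed once per cycle:

```python
# Cyclic intra-refresh: intra-code one macroblock (MB) row per frame, so a
# frame with R MB rows is fully refreshed every R frames. Generic sketch.
def intra_refresh_row(frame_idx: int, mb_rows: int) -> int:
    return frame_idx % mb_rows

mb_rows = 1088 // 16   # e.g. 68 MB rows for a 16-pixel-padded 1080p frame
for f in range(3):
    print(f, intra_refresh_row(f, mb_rows))   # rows 0, 1, 2 refreshed in turn
```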
{"title":"Impact of intra-refresh provision on a data-partitioned wireless broadband video streaming scheme","authors":"L. Al-Jobouri, M. Fleury, M. Ghanbari","doi":"10.1109/CEEC.2010.5606483","DOIUrl":"https://doi.org/10.1109/CEEC.2010.5606483","url":null,"abstract":"Intra-refresh macroblocks are normally provided in mobile broadband wireless access to avoid the effect of temporal error propagation. The questions then arise are: in what form should the refresh take place; and what percentage of refresh is necessary. This paper is a study of intra-refresh provision in the context of a robust video streaming scheme. The scheme combines data-partitioned video compression with adaptive channel coding and redundant packets. The main conclusions from a detailed analysis are that: because of the effect on packet size it is important to select a moderate quantization parameter; and because of the higher overhead from cyclic intra macroblock line update it is better to select a low percentage per frame of intra-refresh macroblocks. In harsh channel conditions from the combined effect of slow and fast fading producing 'bursty' errors, all the proposed measures are necessary but then periodic intra-refresh can be avoided with its sudden increases in the data-rate if the proposed levels of intra-refresh macroblocks are applied.","PeriodicalId":175099,"journal":{"name":"2010 2nd Computer Science and Electronic Engineering Conference (CEEC)","volume":"91 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127426414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Concurrent statechart slicing
Pub Date: 2010-10-21 | DOI: 10.1109/CEEC.2010.5606493
Arthorn Luangsodsai, C. Fox
The paper describes a system for slicing concurrent statecharts. Slicing seeks to remove those parts of a statechart that are not relevant to a given criterion. The technique can be applied to support model-based analysis, testing, debugging, and maintenance of embedded and reactive systems. An And-Or dependence graph is used to represent the control and data dependencies of statecharts. The slicing algorithm determines the slice by traversing the dependence graph from a point specified by the slicing criterion. We deal with concurrent statecharts by taking into account interference dependencies, including parallel control dependence, interference control dependence, and interference data dependence.
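The traversal itself is a standard backward reachability over the dependence graph; the sketch below shows the skeleton (a generic slice over a plain dependence graph, ignoring the And-Or structure and the interference-dependence kinds the paper distinguishes):

```python
def backward_slice(deps, criterion):
    """deps maps each node to the nodes it depends on (control or data);
    the slice is everything reachable from the criterion along those edges."""
    slice_set, stack = set(), [criterion]
    while stack:
        node = stack.pop()
        if node in slice_set:
            continue
        slice_set.add(node)
        stack.extend(deps.get(node, ()))
    return slice_set

deps = {"s3": ["s1", "s2"], "s2": ["s1"], "s1": []}
print(sorted(backward_slice(deps, "s3")))  # ['s1', 's2', 's3']
```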
{"title":"Concurrent statechart slicing","authors":"Arthorn Luangsodsai, C. Fox","doi":"10.1109/CEEC.2010.5606493","DOIUrl":"https://doi.org/10.1109/CEEC.2010.5606493","url":null,"abstract":"The paper describes a system for slicing concurrent statecharts. Slicing seeks to remove those parts of a statechart that are not relevant for a given criteria. The technique can be applied to support model-based analysis, testing, debugging and maintenance of embedded systems and reactive systems. An And-Or dependence graph is used to represent the control and data dependencies of statecharts. The slicing algorithm determines the slice by traversing the dependence graph from a point that is specified by the slicing criteria. We deal with concurrent statecharts by taking into account of interference dependencies including parallel control dependence, interference control dependence and interference data dependence.","PeriodicalId":175099,"journal":{"name":"2010 2nd Computer Science and Electronic Engineering Conference (CEEC)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127285400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid virtual bass system for optimized steady-state and transient performance
Pub Date: 2010-10-21 | DOI: 10.1109/CEEC.2010.5606489
Adam J. Hill, M. Hawksford
Bandwidth extension of a constrained loudspeaker system is regularly achieved by employing nonlinear bass synthesis. The method relies on the doctrine of the missing fundamental, whereby humans infer the presence of a fundamental tone when presented with a signal consisting of higher harmonics of that tone. Nonlinear devices and phase vocoders are commonly used for signal generation, but both exhibit deficiencies. A system is proposed in which the two approaches are used in tandem, via a mixing algorithm, to suppress these deficiencies. Mixing is driven by analysis of the signal's transient content in the frequency domain using constant-Q transforms. The hybrid approach is rated subjectively against various nonlinear-device and phase-vocoder techniques using the MUSHRA test method.
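The missing-fundamental effect the method relies on is easy to demonstrate (a generic illustration, not the paper's synthesis chain): play only the harmonics 2f0, 3f0, 4f0 of a tone a small driver cannot reproduce, and listeners perceive a pitch at f0.

```python
# Synthesize harmonics of a 50 Hz fundamental that is itself absent; the
# auditory system infers the 50 Hz pitch from the harmonic series.
import numpy as np

fs, f0, dur = 44100, 50.0, 1.0
t = np.arange(int(fs * dur)) / fs
harmonics = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in (2, 3, 4))
harmonics /= np.max(np.abs(harmonics))   # normalise to [-1, 1] for playback
```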
{"title":"A hybrid virtual bass system for optimized steady-state and transient performance","authors":"Adam J. Hill, M. Hawksford","doi":"10.1109/CEEC.2010.5606489","DOIUrl":"https://doi.org/10.1109/CEEC.2010.5606489","url":null,"abstract":"Bandwidth extension of a constrained loudspeaker system is regularly achieved employing nonlinear bass synthesis. The method operates on the doctrine of the missing fundamental whereby humans infer the presence of a fundamental tone when presented with a signal consisting of higher harmonics of said tone. Nonlinear devices and phase vocoders are commonly used for signal generation; both exhibiting deficiencies. A system is proposed where the two approaches are used in tandem via a mixing algorithm to suppress these deficiencies. Mixing is performed by signal transient content analysis in the frequency domain using constant-Q transforms. The hybrid approach is rated subjectively against various nonlinear device and phase vocoder techniques using the MUSHRA test method.","PeriodicalId":175099,"journal":{"name":"2010 2nd Computer Science and Electronic Engineering Conference (CEEC)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121221725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Determination of the surface reflectance properties of timber using photometric stereo technique
Pub Date: 2010-10-21 | DOI: 10.1109/CEEC.2010.5606492
Philips S. Ogun, M. Jackson, R. Parkin
In-process surface inspection during wood machining has attracted great interest in recent years owing to the growing desire to minimise wastage and the increasing demand for higher-quality wooden products. To take advantage of developments in machine vision technologies for surface quality inspection, it is necessary to investigate the reflectance properties of wood so that standard machine-vision-based assessment methods can be established. This paper describes a method for estimating the surface albedo of wood using the photometric stereo technique. It reveals that the albedo of timber is highly variable, which can be attributed to the hygroscopic and anisotropic nature of the material.
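Classic Lambertian photometric stereo recovers albedo as follows (the standard formulation; the paper's exact setup is not specified in the abstract): with intensities captured under at least three known light directions, solve I = rho * (n . l) per pixel and take the albedo rho as the norm of g = rho * n.

```python
import numpy as np

def photometric_stereo(I, L):
    """I: (k, h, w) images under k >= 3 known lights; L: (k, 3) unit light
    directions. Per-pixel least-squares solve of I = L @ g, g = albedo * normal."""
    k, h, w = I.shape
    g = np.linalg.lstsq(L, I.reshape(k, -1), rcond=None)[0]   # (3, h*w)
    rho = np.linalg.norm(g, axis=0)                           # per-pixel albedo
    normals = g / np.maximum(rho, 1e-9)                       # unit surface normals
    return rho.reshape(h, w), normals.reshape(3, h, w)
```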
{"title":"Determination of the surface reflectance properties of timber using photometric stereo technique","authors":"Philips S. Ogun, M. Jackson, R. Parkin","doi":"10.1109/CEEC.2010.5606492","DOIUrl":"https://doi.org/10.1109/CEEC.2010.5606492","url":null,"abstract":"In-process surface inspection during wood machining has attracted great interest in recent years due to the growing desire to minimise wastage and the increasing demand for higher quality wooden products. In order to take advantage of the developments in machine vision technologies for surface quality inspection, it is necessary to investigate the reflectance properties of wood so that standard machine-vision based assessment methods can be established. This paper describes a method for estimating the surface albedo of wood using photometric stereo technique. It is revealed that the albedo of timber is highly variable, which can be attributed to its hygroscopic and anisotropic material nature.","PeriodicalId":175099,"journal":{"name":"2010 2nd Computer Science and Electronic Engineering Conference (CEEC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131117175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimised clustering method for reducing challenges of network forensics
Pub Date: 2010-10-21 | DOI: 10.1109/CEEC.2010.5606495
J. Nehinbe
Network forensics is challenging because of the vast quantities of low-level alerts that network intrusion detectors generate in order to achieve high detection rates. Clustering analyses alone are insufficient to establish the overall patterns, sequential dependencies, and precise classifications of the attacks embedded in low-level alerts, because there are several ways to cluster a set of alerts, especially if the alerts contain clustering criteria that take several values. Consequently, it is difficult to promptly select an appropriate clustering technique for investigating computer attacks and, at the same time, to handle effectively the trade-offs between the interpretation and the clustering of low-level alerts. Accordingly, alerts, attacks, and the corresponding countermeasures are frequently mismatched, and several realistic attacks easily circumvent early detection. In this paper, therefore, intrusive alerts were clustered and the quality of each cluster was evaluated. The results demonstrate how a measure of entropy can be used to establish a suitable clustering technique for investigating computer attacks.
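The abstract does not define its entropy measure; a common choice (assumed here for illustration) is the Shannon entropy of the label distribution inside each cluster, where lower entropy indicates a purer cluster:

```python
import math
from collections import Counter

def cluster_entropy(labels):
    """Shannon entropy (bits) of the alert labels within one cluster."""
    counts, n = Counter(labels), len(labels)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(cluster_entropy(["scan", "scan", "scan", "dos"]))  # ~0.81 bits, mixed
print(cluster_entropy(["scan"] * 4))                     # 0.0: a pure cluster
```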
{"title":"Optimised clustering method for reducing challenges of network forensics","authors":"J. Nehinbe","doi":"10.1109/CEEC.2010.5606495","DOIUrl":"https://doi.org/10.1109/CEEC.2010.5606495","url":null,"abstract":"Network forensics are challenging because of numerous quantities of low level alerts that are generated by network intrusion detectors generate to achieve high detection rates. However, clustering analyses are insufficient to establish overall patterns, sequential dependencies and precise classifications of attacks embedded in of low level alerts. This is because there are several ways to cluster a set of alerts especially if the alerts contain clustering criteria that have several values. Consequently, it is difficult to promptly select an appropriate clustering technique for investigating computer attacks and to concurrently handle the tradeoffs between interpretations and clustering of low level alerts effectively. Accordingly, alerts, attacks and corresponding countermeasures are frequently mismatched. Hence, several realistic attacks easily circumvent early detections. Therefore, in this paper, intrusive alerts were clustered and the quality of each cluster was evaluated. The results demonstrate how a measure of entropy can be used to establish suitable clustering technique for investigating computer attacks.","PeriodicalId":175099,"journal":{"name":"2010 2nd Computer Science and Electronic Engineering Conference (CEEC)","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133230200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}