This paper presents a novel dynamic step-size updating rule for the online blind separation of single-distribution sources. The rule divides the separation process into two stages according to an independence measure of the output signals, and applies a corresponding step-size updating algorithm in each stage. Because the rule promptly detects whether the channel matrix has changed, it can separate the source signals effectively whether the channel is time-varying or stationary. Simulations verify that the rule is robust and achieves both a fast convergence rate and a low steady-state error.
"A Dynamic Step-Size Updating Rule for Single Distribution Blind Source Separation," Xuansen He, Third International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP 2007), 2007-11-26. DOI: 10.1109/IIH-MSP.2007.15.
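The two-stage idea can be sketched with an online natural-gradient update whose step size switches on a running independence measure of the outputs. The nonlinearity, thresholds, and the cross-correlation measure below are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def online_bss(x, mu_search=0.01, mu_track=0.001, thresh=0.05, avg=0.99):
    """Online natural-gradient separation with a two-stage step size.

    x : (n, T) array of mixed signals.
    The independence measure c is a running average of the off-diagonal
    energy of f(y) y^T; f(y) = y**3 and the thresholds are assumptions.
    """
    n, T = x.shape
    W = np.eye(n)
    c = 1.0                                   # running independence measure
    for t in range(T):
        y = W @ x[:, t]
        fy = y ** 3                           # nonlinearity for sub-Gaussian sources
        G = np.outer(fy, y)
        off = G - np.diag(np.diag(G))         # cross terms -> dependence residue
        c = avg * c + (1 - avg) * np.abs(off).mean()
        mu = mu_search if c > thresh else mu_track   # stage 1 vs. stage 2
        W += mu * (np.eye(n) - G) @ W         # natural-gradient update
    return W
```

A channel change would push the measure `c` back above the threshold, switching the rule back to the large search step size.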
This paper proposes an algorithm to determine the camera parameters for coplanar camera calibration. The proposed method requires only one image to compute the camera parameters. Intermediate parameters, defined in terms of the camera parameters, are first determined by a linear method. The final intermediate parameters and the lens distortion coefficients are then computed iteratively by a nonlinear optimization method, initialized with the parameters obtained from the linear step. Finally, the camera parameters are recovered from the final intermediate parameters. The proposed method is tested on synthetic images and compared with a conventional method; the experimental results show that the camera parameters it determines are accurate and stable.
"An Algorithm for Coplanar Camera Calibration," K. Sirisantisamrid, K. Tirasesth, T. Matsuura, IIH-MSP 2007, 2007-11-26. DOI: 10.1109/IIH-MSP.2007.72.
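The linear stage can be illustrated with a standard DLT estimate of the world-to-image homography for coplanar points (Z = 0), whose entries play the role of the intermediate parameters. This is a generic sketch, not the paper's exact parameterization, and it omits the nonlinear distortion refinement:

```python
import numpy as np

def estimate_homography(world_pts, image_pts):
    """Linear (DLT) estimate of the 3x3 homography mapping coplanar
    world points (X, Y) with Z = 0 to image points (u, v)."""
    A = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # The homography is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

In a pipeline like the paper's, the entries of H would seed the iterative nonlinear refinement of the distortion coefficients.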
Pub Date: 2007-11-26. DOI: 10.1109/IIH-MSP.2007.243
N. Aoki
This study investigates the potential of value-added speech communication using steganography. The quality of VoIP-based speech communication may be degraded by packet loss, which is essentially unavoidable in best-effort networks. This study proposes a steganography-based packet loss concealment technique for mitigating such degradation, as well as a steganography-based band extension technique.
"Potential of Value-Added Speech Communications by Using Steganography," IIH-MSP 2007.
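One common way to realize steganographic packet loss concealment is to hide a coarse copy of each frame in the least significant bits of the following frame, so a lost packet can be approximated from its successor. The bit allocation below is an assumption for illustration; the paper's embedding scheme may differ:

```python
import numpy as np

BITS = 2  # LSBs borrowed per 16-bit sample (an illustrative rate)

def embed(frame, prev_frame):
    """Hide the top BITS bits of each prev_frame sample in the
    BITS LSBs of the corresponding frame sample (16-bit PCM)."""
    mask = np.uint16((1 << BITS) - 1)
    coarse = (prev_frame.astype(np.uint16) >> (16 - BITS)) & mask
    return ((frame.astype(np.uint16) & ~mask) | coarse).astype(np.int16)

def conceal(next_frame):
    """Rebuild a coarse estimate of a lost frame from the LSBs
    hidden in the frame that follows it."""
    mask = np.uint16((1 << BITS) - 1)
    coarse = next_frame.astype(np.uint16) & mask
    return np.left_shift(coarse, 16 - BITS).astype(np.int16)
```

The host distortion is confined to the borrowed LSBs, which is what makes the hidden channel nearly transparent.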
Pub Date: 2007-11-26. DOI: 10.1109/IIH-MSP.2007.446
A. Ito, S. Makino
In this paper we investigate methods that increase the correlation between two values using one or two bits of extra information. For one extra bit, we examine the '1-bit quantization,' 'sign correction,' and 'difference quantization' methods; for two extra bits, the '2-bit quantization,' 'sign correction + difference quantization,' and '2-bit difference quantization' methods. Theoretical analysis and numerical experiments show that the quantization-based methods are best when the original data are weakly correlated, while 'difference quantization,' or its combination with sign correction, is better when the original data are strongly correlated. We then apply the methods to multiple description coding of speech signals.
"Increasing Correlation using a Few Bits for Multiple Description Coding," IIH-MSP 2007.
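The one-bit 'sign correction' method, for example, can be sketched as transmitting sign(x) and flipping y when its sign disagrees. This is one plausible reading for illustration; the paper's exact quantizers may differ:

```python
import numpy as np

def sign_correct(x, y):
    """'Sign correction' with one extra bit per pair: the sender
    transmits b = sign(x); the receiver flips y whenever its sign
    disagrees with b.  |y| is left untouched."""
    b = x >= 0                         # the one transmitted bit per sample
    return np.where((y >= 0) == b, y, -y)
```

Pairs whose product was negative become positive, so the sample correlation between x and the corrected y can only go up.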
Pub Date: 2007-11-26. DOI: 10.1109/IIH-MSP.2007.409
Kan-Ru Chen, Chia-Te Chou, S. Shih, Wen-Shiung Chen, Duan-Yu Chen
In this paper, we propose a method for selecting edge-type features for iris recognition. The AdaBoost algorithm is used to select a filter bank from a pool of candidate filters. The decisions of the weak classifiers associated with the filter bank are linearly combined to form a strong classifier. Experiments on real data assess the performance of the resulting strong classifier; the results show that boosting effectively improves recognition accuracy at the cost of a slight increase in computation time.
"Feature Selection for Iris Recognition with AdaBoost," IIH-MSP 2007.
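The selection loop can be sketched with discrete AdaBoost over decision stumps, one stump per filter response; each round keeps the filter whose stump has the lowest weighted error, and the chosen stumps form the strong classifier. The stump form and training details are generic assumptions:

```python
import numpy as np

def adaboost_select(F, y, rounds=3):
    """Greedy filter selection with discrete AdaBoost.
    F : (n_samples, n_filters) filter responses; y : labels in {-1, +1}.
    Returns the chosen stumps as (filter, threshold, polarity, alpha)."""
    n, m = F.shape
    w = np.full(n, 1.0 / n)            # sample weights
    chosen = []
    for _ in range(rounds):
        best = None
        for j in range(m):             # exhaustive stump search
            for thr in np.unique(F[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (F[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)  # reweight toward mistakes
        w /= w.sum()
        chosen.append((j, thr, pol, alpha))
    return chosen

def strong_classify(F, chosen):
    """Linear combination of the selected weak classifiers."""
    s = np.zeros(F.shape[0])
    for j, thr, pol, alpha in chosen:
        s += alpha * np.where(pol * (F[:, j] - thr) >= 0, 1, -1)
    return np.sign(s)
```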
Pub Date: 2007-11-26. DOI: 10.1109/IIH-MSP.2007.218
Chen Change Loy, W. Lai, C. Lim
This paper presents a keystroke dynamics-based user authentication system built on the ARTMAP-FD neural network. The effectiveness of ARTMAP-FD in classifying keystroke patterns is analyzed and compared against a number of widely used machine learning systems; the results show that ARTMAP-FD performs well against many of its counterparts. In addition, beyond the conventional typing-time characteristics, the applicability of typing pressure to ascertaining a user's identity is investigated. The experimental results show that combining latency and pressure patterns improves the equal error rate (EER) of the system.
"Keystroke Patterns Classification Using the ARTMAP-FD Neural Network," IIH-MSP 2007.
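The reported metric, the equal error rate, is the operating point where the false rejection rate equals the false acceptance rate. A minimal sketch of computing it from genuine and impostor match scores (higher score = more genuine-like):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over all observed scores and return
    the error rate where FRR (genuine rejected) and FAR (impostor
    accepted) are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = 1.0, 0.0
    for t in thresholds:
        frr = np.mean(genuine < t)     # genuine scores below threshold
        far = np.mean(impostor >= t)   # impostor scores above threshold
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer
```

A lower EER from fused latency-plus-pressure scores is exactly the improvement the abstract reports.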
To overcome the premature convergence of the standard particle swarm optimization (PSO) algorithm caused by a large loss of population diversity, a measure of population diversity and its computation are given, and an adaptive PSO with a dynamically changing inertia weight is proposed. Simulation results show that the adaptive PSO not only effectively alleviates premature convergence but also converges quickly, balancing the trade-off between exploration and exploitation.
"A Modified Particle Swarm Optimization Algorithm with Dynamic Adaptive," Yang Bo, Zhang Ding-xue, Liao Rui-quan, IIH-MSP 2007, 2007-11-26. DOI: 10.1109/IIH-MSP.2007.32.
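A minimal sketch of the diversity-driven idea: measure swarm diversity as the mean distance to the centroid and raise the inertia weight when diversity falls, restoring exploration. The diversity threshold, weight schedule, and test function below are assumptions, not the paper's exact design:

```python
import numpy as np

def sphere(x):
    """Classic benchmark: minimum 0 at the origin."""
    return np.sum(x ** 2, axis=1)

def adaptive_pso(f, dim=5, n=20, iters=200, w_max=0.9, w_min=0.4, d_low=0.05):
    rng = np.random.default_rng(5)
    lo, hi = -5.0, 5.0
    X = rng.uniform(lo, hi, (n, dim))
    V = np.zeros((n, dim))
    P, Pf = X.copy(), f(X)                      # personal bests
    g, gf = P[np.argmin(Pf)].copy(), Pf.min()   # global best
    diag = np.sqrt(dim) * (hi - lo)             # normalization for diversity
    for _ in range(iters):
        # Diversity = mean distance to centroid, normalized by the range diagonal.
        div = np.mean(np.linalg.norm(X - X.mean(0), axis=1)) / diag
        w = w_max if div < d_low else w_min     # boost exploration when diversity is lost
        r1, r2 = rng.random((2, n, dim))
        V = w * V + 2.0 * r1 * (P - X) + 2.0 * r2 * (g - X)
        X = np.clip(X + V, lo, hi)
        fx = f(X)
        better = fx < Pf
        P[better], Pf[better] = X[better], fx[better]
        if Pf.min() < gf:
            g, gf = P[np.argmin(Pf)].copy(), Pf.min()
    return g, gf
```

The switch plays both sides of the trade-off the abstract mentions: a small weight exploits, and the diversity trigger re-injects exploration before the swarm stagnates.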
The quality of an iris image is a key factor affecting the accuracy of an iris recognition system. Selecting good iris images from a sequence of frames can effectively reduce the false rejection and false acceptance rates of the system. Four main factors often degrade iris image quality: defocus, motion blur, eyelid occlusion, and eyelash occlusion. Considering the texture distribution characteristics of iris images, a novel image quality assessment scheme is proposed that handles each of these situations. Experimental results show that the presented method is efficient, runs in real time, and produces evaluations consistent with human judgment.
"A New Scheme of Iris Image Quality Assessment," Guangming Lu, Jiayin Qi, Q. Liao, IIH-MSP 2007, 2007-11-26. DOI: 10.1109/IIH-MSP.2007.45.
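The defocus check, for instance, can be sketched as a high-frequency energy measure, the variance of a discrete Laplacian, which drops sharply for blurred images. The kernel is a generic sharpness measure, not necessarily the paper's exact operator:

```python
import numpy as np

def focus_score(img):
    """Sharpness via high-frequency energy: variance of the 4-neighbour
    discrete Laplacian.  Defocused (blurred) images score low."""
    img = img.astype(float)
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()
```

Ranking frames by such a score (plus occlusion checks) is how a quality gate would pick the best frames from a capture sequence.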
Pub Date: 2007-11-26. DOI: 10.1109/IIH-MSP.2007.432
A. Kunisa
This paper presents a new metadata embedding framework for multimedia content in which the metadata and its related information are registered in a database. A watermarking method based on the framework computes the metadata that yields the smallest perceptual artifact by extracting the metadata from the original, unwatermarked content. This reduces the required watermark power at embedding time, because the content itself naturally carries the metadata. If the extracted metadata has already been registered in the database, several bits of the metadata are inverted, both to avoid double registration and to make the duplicate detectable with higher probability than other unregistered metadata.
"Host-Cooperative Metadata Embedding Framework," IIH-MSP 2007.
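The double-registration rule can be sketched as: on a database collision, invert a designated bit pattern in the metadata before registering it. The mask and collision policy below are assumptions for illustration:

```python
def register(db, meta, mask=0b101):
    """Register metadata (an int) in db (a set).  If it is already
    present, invert the designated bits so the stored variant avoids
    double registration and simultaneously flags the duplicate."""
    if meta in db:
        meta ^= mask  # invert several designated bits on collision
    db.add(meta)
    return meta
```

Because only the masked bits differ, a later lookup can recognize the variant as a flagged duplicate of the original registration.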
Pub Date: 2007-11-26. DOI: 10.1109/IIH-MSP.2007.325
W. Fang
We propose in this paper a new type of visual cryptography (VC): VC in reversible style. For any two given secret images, two corresponding transparencies S1 and S2, also known as shares, can be produced. Both transparencies look like noise. However, if the front views of the two transparencies are stacked, the first secret image is revealed; if the front view of S1 is stacked with the back view (the turn-over) of S2, the second secret image is revealed.
"Visual Cryptography in reversible style," W. Fang, IIH-MSP 2007, 2007-11-26. DOI: 10.1109/IIH-MSP.2007.325.
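The decoding side of the scheme is just physical stacking, which behaves like a logical OR of dark pixels; turning a transparency over mirrors it left-right. A sketch of the two stacking modes (constructing shares that decode this way is the paper's contribution and is not reproduced here):

```python
import numpy as np

def stack(front, other, flipped=False):
    """Simulate stacking two transparencies: a pixel is dark (True)
    if it is dark on either share.  flipped=True uses the back view
    of `other`, i.e. its left-right mirror."""
    if flipped:
        other = other[:, ::-1]
    return front | other
```

With properly constructed shares, `stack(S1, S2)` reveals the first secret and `stack(S1, S2, flipped=True)` reveals the second.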