Pub Date : 2001-07-04DOI: 10.1109/EURCON.2001.938138
G. Caccia, R. Lancini
Digital watermarking, or information hiding, refers to techniques for embedding data into other data. In this paper, we consider the case in which the host data are video sequences; in particular, we concentrate on the problem of digital watermarking in the bit stream domain. Applications are not limited to copyright protection and user identification. For example, one could use information hiding techniques to embed a description of the scene for video indexing; other applications include subtitling, multi-lingual services, teletext, etc. In many of these, it is desirable for the data hiding scheme to work in the bit stream domain, since the original sequences are likely stored in compressed format. The main characteristics of digital watermarks are invisibility and robustness. By invisibility, we mean that the watermark must not impair the visual quality of the original video. By robustness, we mean that the embedded information should be hard to erase without destroying the original data, resisting processing that could, intentionally or not, tamper with it. The purpose of this paper is to introduce a method that embeds a number of bits per frame in MPEG-2 coded video sequences, acting directly in the bit stream domain. These bits can be used for any purpose for which the offered bandwidth is sufficient.
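The abstract does not describe the actual embedding algorithm. As a generic illustration of data hiding in the compressed domain (not the authors' method), the sketch below embeds payload bits in the least-significant bits of selected nonzero quantized DCT coefficients; the coefficient array, positions, and function names are all hypothetical:

```python
import numpy as np

def embed_bits(coeffs, bits, positions):
    """Embed payload bits into the LSBs of selected quantized
    DCT coefficients (illustrative only, not the paper's scheme)."""
    out = coeffs.copy()
    for bit, pos in zip(bits, positions):
        out[pos] = (out[pos] & ~1) | bit   # force LSB to the payload bit
    return out

def extract_bits(coeffs, positions):
    """Recover the payload by reading back the same LSBs."""
    return [int(coeffs[p]) & 1 for p in positions]

# Toy run: mark three coefficients of a hypothetical block.
coeffs = np.array([12, -7, 3, 0, 5, 9], dtype=np.int64)
marked = embed_bits(coeffs, [1, 0, 1], positions=[0, 1, 4])
```

In practice such a scheme must also keep the modified bit stream standard-compliant and re-control the bit rate, which is where the real difficulty of bit-stream-domain hiding lies.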
Title: Data hiding in MPEG-2 bit stream domain
Published in: EUROCON'2001. International Conference on Trends in Communications. Technical Program, Proceedings (Cat. No.01EX439)
Pub Date : 2001-07-04DOI: 10.1109/EURCON.2001.937752
D. Straussnigg, A. Kozarev, H. Schenk, J. Bodner
In this paper, a new approach to time-domain equalization for a multicarrier transmission system is described. The main idea behind our method is to enhance the performance and reduce the complexity and implementation cost of the well-known FIR-based time-domain equalization (TEQ) algorithm by using a simple high-pass filter for channel response shortening, and to improve the frequency equalization with additional compensation of the remaining transient in a DMT (discrete multitone modulation) symbol. Computer simulations were performed on a DMT-based ADSL simulation platform, where three different configurations were tested: (1) a 35-tap FIR-based TEQ, (2) a shortening-filter-based TEQ, and (3) a shortening-filter and transient-compensation-based TEQ.
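To see why a simple high-pass filter can shorten a channel response, consider a toy channel with an exponentially decaying tail: a first-order FIR whose zero matches the decay rate concentrates nearly all the impulse-response energy into a short window. This is only a minimal sketch of the shortening idea, not the authors' filter design:

```python
import numpy as np

# Toy channel impulse response with a long exponential tail.
n = np.arange(64)
h = 0.9 ** n

# First-difference high-pass FIR as a crude shortening filter
# (its zero at z = 0.9 cancels the tail of this particular channel).
w = np.array([1.0, -0.9])
h_short = np.convolve(h, w)

def energy_window(x, L):
    """Largest fraction of total energy captured by any length-L window."""
    e = x ** 2
    windows = [e[i:i + L].sum() for i in range(len(e) - L + 1)]
    return max(windows) / e.sum()
```

The raw channel needs a long window to capture its energy, while the shortened response fits almost entirely into a few taps, which is exactly what a DMT cyclic prefix requires.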
Title: New approach to time-domain equalization and frequency-domain transient compensation for a DMT-based ADSL system
Pub Date : 2001-07-04DOI: 10.1109/EURCON.2001.938145
A. Mohamed
This work reconfigures NetKuang to improve its performance. NetKuang operates on computer networks running the UNIX OS and detects vulnerabilities arising from poor system configurations. It is thus not only capable of searching a large number of hosts in parallel, but also considers potential configuration vulnerabilities present in the network. The main disadvantage of NetKuang is that it can pursue only one vulnerability at a time on a given system. Furthermore, it leaks memory while running a task. Our work aims at pursuing more than one vulnerability at a time. Vulnerabilities are discovered using a backward goal-based technique; that is, we introduce a different search technique, based on a genetic algorithm, to discover the vulnerabilities. We aim at using genetic algorithms to point out several vulnerabilities simultaneously. Moreover, our technique overcomes most of the previously mentioned disadvantages of the standard technique. Considering genetic algorithms, the search could be applied in two different ways: with a simple genetic algorithm or with a classifier genetic algorithm. The latter is chosen in this paper, as it produces the better result. The time and space complexities are computed in order to compare the proposed technique with the standard one.
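As a minimal sketch of the genetic-algorithm search idea (not the paper's classifier GA or NetKuang's actual goal model), the toy below encodes a host configuration as a 5-bit string and evolves a population toward hypothetical "vulnerable" configurations, scored by Hamming distance:

```python
import random

random.seed(1)

# Hypothetical model: a host configuration is a 5-bit string; two
# particular bit patterns represent exploitable misconfigurations.
TARGETS = [0b10110, 0b01101]

def fitness(ind):
    # Closer (in Hamming distance) to any target = more promising.
    return max(5 - bin(ind ^ t).count("1") for t in TARGETS)

def mutate(ind):
    return ind ^ (1 << random.randrange(5))   # flip one random bit

def crossover(a, b):
    point = random.randrange(1, 5)            # single-point crossover
    mask = (1 << point) - 1
    return (a & mask) | (b & ~mask)

pop = [random.randrange(32) for _ in range(20)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                        # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]
```

Because the population holds many candidates at once, several distinct near-target configurations can survive selection simultaneously, which is the property the paper exploits to report multiple vulnerabilities in one run.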
Title: Using genetic algorithm to improve the performance of multi-host vulnerability checkers
Pub Date : 2001-07-04DOI: 10.1109/EURCON.2001.938123
L. Pauk, Z. Skvor
A new approach suitable for determining the maximal stable time step for the finite-difference time domain (FDTD) algorithm in curvilinear coordinates is presented. It is based on a modified variable separation method, applied to the set of difference equations of the FDTD algorithm. The investigation is carried out in spherical and cylindrical coordinates. A simple yet sufficiently accurate approximate formula for cylindrical coordinates is presented. Applied to Cartesian coordinates, this approach yields the well-known Courant condition.
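For reference, the Cartesian-coordinate special case that the paper's approach recovers is the standard Courant stability limit; a direct evaluation (with the free-space speed of light, cell sizes hypothetical) looks like:

```python
import math

def courant_dt_max(dx, dy, dz, c=299792458.0):
    """Maximum stable FDTD time step in Cartesian coordinates:
    dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

# For a cubic 1 mm cell this reduces to dx / (c * sqrt(3)).
dt = courant_dt_max(1e-3, 1e-3, 1e-3)
```

In curvilinear coordinates the cell dimensions vary with position (e.g. with radius in cylindrical grids), which is why a separate analysis, as in the paper, is needed there.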
Title: Stability of FDTD in curvilinear coordinates
Pub Date : 2001-07-04DOI: 10.1109/EURCON.2001.938150
K. Klaric, K. Pudar, J. Puksec
The introduction of new services, flexible charging in the integrated services digital network (ISDN), and call analysis based on Petri nets are described. By performing Petri net simulation and analysing the results in early design phases for formal verification, potential deadlocks and conflicting activities can be discovered and avoided. Simulation and analysis were performed using the DaNAMiCS tool.
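The deadlock detection the abstract refers to amounts to a reachability search over the net's markings for states in which no transition is enabled. A minimal sketch, with a hypothetical toy call model (the paper's actual net and the DaNAMiCS tool are not reproduced here):

```python
def enabled(marking, transitions):
    """Transitions whose input places all hold at least one token."""
    return [t for t, (pre, post) in transitions.items()
            if all(marking[p] > 0 for p in pre)]

def fire(marking, pre, post):
    m = dict(marking)
    for p in pre:
        m[p] -= 1
    for p in post:
        m[p] += 1
    return m

def find_deadlocks(initial, transitions):
    """Exhaustive reachability search for dead markings."""
    seen, stack, deadlocks = set(), [initial], []
    while stack:
        m = stack.pop()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        en = enabled(m, transitions)
        if not en:
            deadlocks.append(m)
        for t in en:
            pre, post = transitions[t]
            stack.append(fire(m, pre, post))
    return deadlocks

# Toy call model (names hypothetical): 'abort' consumes the call token
# without producing one, leaving a dead marking.
net = {
    "setup":   (["idle"], ["ringing"]),
    "answer":  (["ringing"], ["connected"]),
    "release": (["connected"], ["idle"]),
    "abort":   (["ringing"], []),
}
m0 = {"idle": 1, "ringing": 0, "connected": 0}
dead = find_deadlocks(m0, net)
```

Finding such dead markings in an early design phase is exactly the kind of defect the paper's Petri-net verification is meant to surface before implementation.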
Title: ISDN call analysis by using Petri net model
Pub Date : 2001-07-04DOI: 10.1109/EURCON.2001.938149
M. Divjak, M. Marolt
The paper presents the advantages of using JavaScript functions in interaction with scriptable applets. The concept was tested on the JxyZET simulation applet, which was developed as a platform-independent complement to xyZET. The latter is an authoring tool for the development and visualisation of various phenomena in physics. The original JxyZET was expanded with some scriptable functions. This permits the definition of more complex elastic bodies built from several hundreds or even thousands of particles and springs. These bodies can be easily modified just by changing some parameters within the JavaScript code included in the parent hypertext. The simulation of complex dynamic systems is investigated, and the introduction of libraries of reusable JavaScript functions is proposed. The use of JavaScript permits the interaction of a user-defined algorithm with the simulation tool, in our case JxyZET. The concept of dynamically conditioned simulation runs is explained, and the possibilities of dynamically restructured simulation models are demonstrated.
Title: Flexibility of JavaScript controled simulations
Pub Date : 2001-07-04DOI: 10.1109/EURCON.2001.938121
D. Courivaud, C. Humbert, M. Sylvain
Indoor propagation embraces various situations differing in the geometry of the buildings and their constituent materials. While the effects of these various parameters on propagation can be studied efficiently by simulation, for instance by ray-tracing methods, it is necessary to validate the results of such studies against as much experimental data as possible, collected in various types of environment.
Title: 2 GHz single-floor indoor propagation results
Pub Date : 2001-07-04DOI: 10.1109/EURCON.2001.937769
S. Ghofrani, M. Jahed-Motlagh, A. Ayatollahi
Using a good statistical model of speckle formation is important in designing an adaptive filter for speckle reduction in ultrasound B-scan images. Most clinical ultrasound imaging systems use a nonlinear logarithmic function to reduce the dynamic range of the input echo signal and to emphasize objects with weak backscatter. Previously, the statistics of log-compressed images had been derived for the Rayleigh and K distributions. In this paper, the statistics of log-compressed echo images are derived for the Nakagami distribution, which is more general than the Rayleigh distribution and has a lower computational cost than the K distribution, and the result is used to design an unsharp masking filter for speckle reduction. To demonstrate the efficiency of the designed adaptive filter, we processed two original ultrasound images of the kidney and liver.
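The unsharp-masking structure the paper builds on blends each pixel with its local mean, weighted by local statistics. The sketch below uses a generic variance-ratio weight rather than the paper's Nakagami-derived weights, purely to illustrate the filter shape:

```python
import numpy as np

def adaptive_unsharp(img, size=3):
    """Adaptive unsharp-masking smoother: out = mean + w*(x - mean),
    where w rises with local variance so edges are preserved.
    The weight here is a generic variance ratio, NOT the paper's
    Nakagami-based statistic."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    gvar = img.var()                      # global variance as noise proxy
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + size, j:j + size]
            lmean, lvar = win.mean(), win.var()
            w = lvar / (lvar + gvar + 1e-12)
            out[i, j] = lmean + w * (img[i, j] - lmean)
    return out
```

On homogeneous speckle regions the local variance is comparable to the noise level, so the weight is small and the filter smooths; near strong edges the weight approaches one and detail is retained.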
Title: An adaptive speckle suppression filter based on Nakagami distribution
Pub Date : 2001-07-04DOI: 10.1109/EURCON.2001.937773
G. Triantafyllidis, D. Tzovaras, M. Strintzis
An efficient technique based on the Bayes test is introduced for detecting occlusions in a noisy stereoscopic image pair. Two hypotheses are used in the formulation of the Bayes decision rules: the first describes the displaced areas, while the other describes the occluded areas. Experimental results illustrating the performance of the proposed technique are presented and evaluated.
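A two-hypothesis Bayes test of this kind reduces, per pixel, to comparing posterior scores under a "displaced" model and an "occluded" model. The sketch below uses Gaussian likelihoods on the stereo matching residual with made-up parameters; the paper's actual observation models and priors are not given in the abstract:

```python
import math

def bayes_decide(residual, sigma_disp=5.0, sigma_occ=25.0, p_occ=0.2):
    """Two-hypothesis Bayes test on a matching residual:
    H0 = correctly displaced pixel (small residual expected),
    H1 = occluded pixel (large residual expected).
    Gaussian likelihoods and the prior p_occ are illustrative."""
    def gauss(x, s):
        return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

    post_occ = gauss(residual, sigma_occ) * p_occ
    post_disp = gauss(residual, sigma_disp) * (1.0 - p_occ)
    return "occluded" if post_occ > post_disp else "displaced"
```

Small residuals are far more likely under the narrow "displaced" density, while large residuals favour the wide "occluded" one, so the rule is effectively a threshold on the residual set by the two variances and the prior.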
Title: Occlusion detection in stereopairs
Pub Date : 2001-07-04DOI: 10.1109/EURCON.2001.937787
G. Baudoin, P. Jardin
This paper presents a new adaptive pre-distortion algorithm for the linearization of power amplifiers and its application to non-constant-envelope modulations such as QAM or OFDM. The pre-distortion system is polynomial. The criterion is the minimization of the mean square error between the baseband equivalent of the real amplifier's output and the ideally amplified signal. The analytic expression of the gradient of this criterion has been calculated, and a stochastic gradient algorithm is applied using it. A special normalization of the coefficients of the polynomial pre-distortion is proposed to improve the speed of convergence. The method has been tested on a class AB power amplifier with baseband signals corresponding to filtered QPSK and OFDM modulations.
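The stochastic-gradient structure can be sketched on a drastically simplified setup: a real-valued memoryless tanh "amplifier" with unit ideal gain and a two-coefficient odd polynomial pre-distorter. The amplifier model, coefficients, and step size are all assumptions for illustration; the paper's class AB model and coefficient normalization are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def amp(z):                      # toy memoryless amplifier (assumption)
    return np.tanh(z)

def amp_deriv(z):                # needed for the analytic gradient
    return 1.0 - np.tanh(z) ** 2

a = np.array([1.0, 0.0])         # pre-distorter coefficients [a1, a3]
mu = 0.05                        # LMS step size

for _ in range(20000):
    x = rng.uniform(-0.8, 0.8)           # baseband sample
    z = a[0] * x + a[1] * x ** 3         # polynomial pre-distortion
    e = amp(z) - x                       # error vs. ideal gain-1 output
    grad = e * amp_deriv(z) * np.array([x, x ** 3])  # analytic gradient
    a -= mu * grad                       # stochastic gradient step
```

After adaptation the cascade pre-distorter + amplifier is much closer to a straight line over the signal range than the bare amplifier, which is the linearization effect the paper quantifies for QPSK and OFDM signals.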
Title: Adaptive polynomial pre-distortion for linearization of power amplifiers in wireless communications and WLAN