Image expansion using segmentation-based method
Pub Date: 1999-08-22 | DOI: 10.1109/PACRIM.1999.799486
A.K. Murad Agha, R. Ward, S. Zahir
Precise image expansion techniques are required for several applications in the image processing field, including reconnaissance photography, cartography, medical imaging, and satellite imagery. Several methods have been employed for this purpose: (1) pixel replication; (2) area sizing; and (3) interpolation and spline methods. All of these methods introduce distortion and noticeable degradation in image quality, especially on and near edges. We introduce a segmentation-based method that produces significantly improved expanded images and maintains high-quality edges. The method segments the image into nonstationary regions and homogeneous regions and then expands each via a different procedure: nonstationary regions are expanded using an elaborate look-ahead-and-back procedure, while homogeneous regions are expanded using an expanded linear prediction approach. Experimental simulation results show that the expanded images are aesthetically and objectively better than those produced by other methods.
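As a rough illustration of the region-splitting idea (not the authors' actual procedures), the sketch below classifies blocks by local variance and expands the two region types differently; the variance threshold, block size, and both expansion steps are stand-in assumptions.

```python
# Illustrative sketch only: the paper's look-ahead-and-back and linear
# prediction procedures are not reproduced here.
import numpy as np
from scipy.ndimage import uniform_filter

def expand_2x(img, var_thresh=100.0, block=8):
    """Expand a grayscale image 2x, handling homogeneous blocks separately."""
    h, w = img.shape
    out = np.zeros((2 * h, 2 * w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            blk = img[i:i + block, j:j + block].astype(float)
            big = np.kron(blk, np.ones((2, 2)))    # pixel replication (edge stand-in)
            if blk.var() <= var_thresh:            # homogeneous region:
                big = uniform_filter(big, size=3)  # smooth (stand-in for linear prediction)
            out[2 * i:2 * i + big.shape[0], 2 * j:2 * j + big.shape[1]] = big
    return out
```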
{"title":"Image expansion using segmentation-based method","authors":"A.K. Murad Agha, R. Ward, S. Zahir","doi":"10.1109/PACRIM.1999.799486","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799486","url":null,"abstract":"Precise image expansion techniques are required for several applications in image processing field. These include reconnaissance photography, cartography, medical images, and satellite imagery. Several methods have been employed for this purpose: (1) pixel replication; (2) area sizing; and (3) interpolation and spline methods. All these methods generate distortion and noticeable degradation in the quality of the image especially on and near edges. We introduce a segmentation-based method that produces significantly improved expanded images and maintains high quality edges. This method segments the image into nonstationary regions and homogenous regions and then expands them separately and via different procedures. Nonstationary regions are expanded using an elaborate look-ahead-and-back procedure. Homogenous regions are expanded using an expanded linear prediction approach. The experimental simulation results show that the expanded images are aesthetically and objectively better than those of other methods.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124183536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Reverse engineering tools in the reporting and analysis of the Year 2000 date problem
Pub Date: 1999-08-22 | DOI: 10.1109/PACRIM.1999.799554
J. Dahmer, R. Foley
With the Year 2000 approaching, a massive amount of code review and modification is under way within industry. One aspect of this millennium problem is computer systems that use a two-digit rendering of the year when interpreting dates. Since the Year 2000 is a fixed deadline that cannot be moved, time is of the essence for businesses in determining whether their software contains incorrect date logic. Any utility that reduces the time software maintainers spend on program comprehension and modification would therefore be of benefit. Since reverse engineering techniques can be used to identify components and interrelationships within software, as well as to build high-level abstractions from low-level code details, it is possible to determine through practical example whether reverse engineering tools and techniques could serve a role in identifying Year 2000 date problems in computer software.
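A tool of the kind the paper evaluates would, at minimum, flag suspicious two-digit-year logic during a code scan. The sketch below shows that idea with an illustrative regular expression; the pattern and function names are assumptions, not drawn from the paper's tooling.

```python
# Hypothetical scanner: the pattern is illustrative, not the paper's tooling.
import re

TWO_DIGIT_YEAR = re.compile(r"\b(yy|yr|year)\w*\s*(==?|[<>]=?)\s*\d{2}\b",
                            re.IGNORECASE)

def scan(path):
    """Return (line number, line) pairs that look like two-digit-year logic."""
    hits = []
    with open(path, errors="replace") as src:
        for lineno, line in enumerate(src, 1):
            if TWO_DIGIT_YEAR.search(line):
                hits.append((lineno, line.rstrip()))
    return hits
```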
{"title":"Reverse engineering tools in the reporting and analysis of the Year 2000 date problem","authors":"J. Dahmer, R. Foley","doi":"10.1109/PACRIM.1999.799554","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799554","url":null,"abstract":"With the Year 2000 approaching, there is a massive amount of code review and modification going on within industry. One aspect of this millennium problem is computer systems that have used a two digit rendering of the year when interpreting dates. Since the Year 2000 is a fixed deadline that cannot be moved, time is of the essence for businesses in determining if their software has any date logic within them that is incorrect. This means that any utilities that can be implemented to reduce the amount of time software maintainers spend on comprehension and modification of programs would be of benefit. Since reverse engineering techniques can be used to identify components and interrelationships within software, as well as build up high level abstractions from low level code details, it is possible to determine through practical example whether reverse engineering tools and techniques could serve a role in identifying computer software date code problems for the Year 2000.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124532675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Multi-channel multi-point distribution service system transceiver implementation
Pub Date: 1999-08-22 | DOI: 10.1109/PACRIM.1999.799522
A. Dinh, R. Bolton, R. Mason, R. Palmer
This paper presents the hardware implementation of a high-speed transceiver to be used in a multi-channel multi-point distribution system (MMDS). Based on the standards specifications, various building blocks are implemented as FPGA prototypes. Data integrity protection has proven expensive to implement, namely the transceiver's forward error correction scheme, which comprises a Reed-Solomon codec and byte interleaving to correct both random and burst errors caused by the channel. Results show that a data rate of 80 Mbit/s can be achieved using the FPGA prototypes; higher data rates are expected when final ASICs are developed.
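The byte-interleaving step works by scattering a channel burst across many Reed-Solomon codewords so that each codeword sees only a few errors. A minimal sketch, assuming a simple row/column block interleaver with illustrative dimensions (the paper's actual parameters are not given here):

```python
# Assumed row/column block interleaver; depth (number of codewords) and n
# (codeword length) are illustrative, not the paper's parameters.
import numpy as np

def interleave(data, depth, n):
    """Write depth*n bytes row-wise, read column-wise."""
    table = np.frombuffer(bytes(data), dtype=np.uint8).reshape(depth, n)
    return table.T.reshape(-1).tobytes()

def deinterleave(data, depth, n):
    table = np.frombuffer(data, dtype=np.uint8).reshape(n, depth)
    return table.T.reshape(-1).tobytes()

# After deinterleaving, a channel burst of b bytes touches each codeword in
# at most ceil(b/depth) places, which the Reed-Solomon decoder can correct.
```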
{"title":"Multi-channel multi-point distribution service system transceiver implementation","authors":"A. Dinh, R. Bolton, R. Mason, R. Palmer","doi":"10.1109/PACRIM.1999.799522","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799522","url":null,"abstract":"This paper presents the hardware implementation of a high-speed transceiver to be used in a multi-channel multi-point distribution system (MMDS). Based on standards specifications, various building blocks are implemented using FPGA prototypes. It has been found that data integrity protection is expensive to implement, namely the forward error correction scheme in the transceiver. This includes Reed-Solomon codec and byte interleaving to correct both random and burst errors causing by the channel. Results show a data rate of 80 Mbit/s can be achieved using FPGA prototypes. Higher data rates are expected when final ASICs are developed.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125598724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

SBDD variable reordering based on probabilistic and evolutionary algorithms
Pub Date: 1999-08-22 | DOI: 10.1109/PACRIM.1999.799556
Mitchell A. Thornton, J. P. Williams, Rolf Drechsler, Nicole Drechsler, D. M. Wessels
Modern CAD tools must represent large Boolean functions compactly in order to obtain reasonable runtimes for synthesis and verification. The shared binary decision diagram (SBDD) with negative edge attributes can represent many functions in compact form if a proper variable ordering is used. In this work we describe a technique for reordering the variables in an SBDD to reduce the size of the data structure. A common heuristic for the variable ordering problem is to group together variables that have similar characteristics. We use this heuristic to formulate a reordering technique based on probability-based metrics. Our results indicate that this technique outperforms sifting with comparable runtimes. Furthermore, the method is robust in that the final results are independent of the initial structure of the SBDD.
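To make the grouping heuristic concrete, here is a hedged sketch: it estimates a simple output probability per variable and orders variables so that similar ones end up adjacent. The metric and the exhaustive enumeration are illustrative stand-ins for the paper's probabilistic metrics and evolutionary search, not its actual algorithm.

```python
# Stand-in metric and ordering; the paper's probabilistic metrics and
# evolutionary search are not reproduced.
import itertools

def cofactor_prob(f, n, i):
    """P(f = 1 | x_i = 1) over all 2^n assignments; f maps a bit tuple to bool."""
    hits = total = 0
    for bits in itertools.product((0, 1), repeat=n):
        if bits[i] == 1:
            total += 1
            hits += bool(f(bits))
    return hits / total

def probability_order(f, n):
    # variables with similar cofactor probabilities end up adjacent
    return sorted(range(n), key=lambda i: cofactor_prob(f, n, i))

majority = lambda bits: sum(bits) >= 2        # example: 3-input majority
print(probability_order(majority, 3))         # symmetric, so any order works
```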
{"title":"SBDD variable reordering based on probabilistic and evolutionary algorithms","authors":"Mitchell A. Thornton, J. P. Williams, Rolf Drechsler, Nicole Drechsler, D. M. Wessels","doi":"10.1109/PACRIM.1999.799556","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799556","url":null,"abstract":"Modern CAD tools must represent large Boolean functions compactly in order to obtain reasonable runtimes for synthesis and verification. The shared binary decision diagram (SBDD) with negative edge attributes can represent many functions in a compact form if a proper variable ordering is used. In this work we describe a technique for reordering the variables in an SBDD to reduce the size of the data structure. A common heuristic for the variable ordering problem is to group variables together that have similar characteristics. We use this heuristic to formulate a technique for the reordering problem using probability based metrics. Our results indicate that this technique outperforms sifting with comparable runtimes. Furthermore, the method is robust in that the final results independent of the initial structure of the SBDD.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"368 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121735214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Practical compactly supported sampling functions of degree 2
Pub Date: 1999-08-22 | DOI: 10.1109/PACRIM.1999.799597
K. Katagishi, K. Toraichi, S. Hattori, Seng Luan Lee, K. Nakamura
In multimedia communications fields such as audio and image processing, sampling functions are used to reconstruct an analog signal from the set of sampled values obtained by digitizing it. However, conventional sampling functions are not suited to high-speed reconstruction of television signals because they are not compactly supported. This paper proposes practical new sampling functions that are compactly supported.
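For reference, the reconstruction model in play is the standard interpolation series; the paper's contribution is a degree-2 kernel $\psi$ with compact support, whose exact form is not reproduced here:

```latex
% Standard reconstruction series; the paper's degree-2 kernel \psi is
% compactly supported (its exact coefficients are not reproduced here).
\[
  \hat{f}(t) \;=\; \sum_{k=-\infty}^{\infty} f(kT)\,
                   \psi\!\Bigl(\frac{t}{T} - k\Bigr),
  \qquad
  \psi(t) = 0 \quad \text{for } |t| \ge t_0 .
\]
```

Because $\psi$ vanishes outside $|t| < t_0$, each reconstructed value draws on only about $2t_0$ neighboring samples; with the classical sinc kernel the support is infinite and the sum never truncates, which is why the abstract rules it out for television-rate processing.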
{"title":"Practical compactly supported sampling functions of degree 2","authors":"K. Katagishi, K. Toraichi, S. Hattori, Seng Luan Lee, K. Nakamura","doi":"10.1109/PACRIM.1999.799597","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799597","url":null,"abstract":"Sampling functions are used to reconstruct an analog signal from a set of sampled values obtained by digitizing the signal in the field of multimedia communications such as audio processing, image processing, and so on. However, conventional sampling functions are not useful for reconstructing television signals with high-speed processing, because they are not compactly supported. This paper proposes practical new sampling functions which are compactly supported.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131359199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Reconstruction of missing blocks in JPEG picture transmission
Pub Date: 1999-08-22 | DOI: 10.1109/PACRIM.1999.799533
M. Ancis, D. Giusto
The paper deals with error concealment in block-coded image transmission over noisy channels. In particular, it proposes a novel algorithm for missing-block reconstruction in the frequency domain: damaged blocks are recovered by interpolating the DCT coefficients of the available neighboring blocks. Four variants of coefficient interpolation are investigated; median and edge-based interpolation are chosen for their high-quality reconstruction capabilities. Experimental results show good performance in homogeneous and textured regions, as well as in blocks containing edges.
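As a concrete reading of the median variant, a lost 8x8 block's DCT coefficients can be taken as the coefficient-wise median of its four neighbors' and then inverse-transformed; the normalization and the handling of unavailable neighbors below are assumptions, and the edge-based variant is not shown.

```python
# Sketch of the median variant only; DCT normalization and neighbor
# availability handling are assumptions.
import numpy as np
from scipy.fft import dctn, idctn

def conceal(above, below, left, right):
    """Estimate a lost 8x8 block from its four available neighbors."""
    neighbor_dcts = [dctn(b, norm="ortho") for b in (above, below, left, right)]
    estimate = np.median(neighbor_dcts, axis=0)   # coefficient-wise median
    return idctn(estimate, norm="ortho")
```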
{"title":"Reconstruction of missing blocks in JPEG picture transmission","authors":"M. Ancis, D. Giusto","doi":"10.1109/PACRIM.1999.799533","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799533","url":null,"abstract":"The paper deals with error concealment in block-coded image transmission over noisy channels. In particular, it proposes a novel algorithm for missing block reconstruction in the frequency domain. In fact, damaged blocks are recovered by interpolating the DCT coefficients of available neighboring blocks. Coefficient interpolation is investigated in four different variants; median and edge-based interpolation are chosen for their capabilities in high-quality reconstruction. Experimental results show a good performance in homogeneous and textured regions, as well as in blocks containing edges.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132028389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Adaptive equalization for partially bandwidth-occupied ADSL transceivers
Pub Date: 1999-08-22 | DOI: 10.1109/PACRIM.1999.799602
X.F. Wang, W. Lu, A. Antoniou
A new optimization criterion for the design of a time-domain equalizer in asymmetric digital subscriber line systems is studied. It is shown that the new criterion maximizes the signal-to-noise ratio regardless of the profile of bandwidth occupancy. Based on the fact that the solution of the optimization problem is unique, two adaptation algorithms are developed, one in the time domain and one in the frequency domain. The computational complexities of both algorithms are comparable to that of the conventional least-mean-square algorithm. Simulation results show that both algorithms converge at a satisfactory speed, with the frequency-domain algorithm offering slightly faster convergence and a smaller excess mean-square error.
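For context on the complexity comparison, a minimal time-domain LMS equalizer update (the conventional baseline the abstract cites, not the paper's new criterion or its two adaptation algorithms) might look like the following; the tap count, step size, and training setup are illustrative.

```python
# Conventional LMS baseline only; taps and step size are illustrative.
import numpy as np

def lms_teq(x, d, taps=16, mu=1e-3):
    """Train a time-domain equalizer: x = received, d = desired sequence."""
    w = np.zeros(taps)
    err = np.zeros(len(d))
    for n in range(taps, len(d)):
        u = x[n - taps:n][::-1]          # most recent sample first
        err[n] = d[n] - w @ u
        w += mu * err[n] * u             # stochastic-gradient tap update
    return w, err
```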
{"title":"Adaptive equalization for partially bandwidth-occupied ADSL transceivers","authors":"X.F. Wang, W. Lu, A. Antoniou","doi":"10.1109/PACRIM.1999.799602","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799602","url":null,"abstract":"A new optimization criterion for the design of a time-domain equalizer in asymmetric digital subscriber line systems is studied. It is shown that the new criterion maximizes the signal-to-noise ratio regardless of the profile of bandwidth occupancy. Based on the fact that the solution of the optimization problem is unique, two adaptation algorithms are developed in the time and frequency domains, respectively. The computational complexities of these two algorithms are comparable to that of the conventional least-mean-square algorithm. Simulation results show that the two algorithms converge at a satisfactory speed but the frequency-domain algorithm offers a slightly faster convergence and a smaller excess mean-square error.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131310824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Client mobility and fault tolerance in a distributed network data system
Pub Date: 1999-08-22 | DOI: 10.1109/PACRIM.1999.799607
A.P. Schoorl, N. Dimopoulos
As wireless and ubiquitous computing become increasingly affordable and widespread, traditional client-server models for distributing data fail to offer the flexibility needed in mobile computing environments. Although systems have been proposed to address these concerns, most rely on changes to existing infrastructures. The article describes a server hierarchy that uses currently available resources to alleviate some of the common problems associated with data mining from mobile hosts. Although designed for retrieving stored status-monitoring information and the topology of cable television amplifier networks, the proposed system is general enough to be used for disseminating arbitrary data across a computer network. Client mobility and fault tolerance, where required, are handled through object serialization and intermediate agents.
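The serialization idea can be pictured as parking a client's session state with an intermediate agent and restoring it after reconnection. A minimal sketch, with hypothetical class and field names (the paper's system would use Java object serialization where pickle appears here):

```python
# Hypothetical names throughout; this is a sketch of the idea, not the
# paper's implementation.
import pickle

class Session:
    """State an intermediate agent holds for a disconnected mobile client."""
    def __init__(self, client_id, pending_queries):
        self.client_id = client_id
        self.pending_queries = pending_queries

def park(session):                  # client disconnects or moves
    return pickle.dumps(session)

def resume(blob):                   # agent restores state on reconnection
    return pickle.loads(blob)
```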
{"title":"Client mobility and fault tolerance in a distributed network data system","authors":"A.P. Schoorl, N. Dimopoulos","doi":"10.1109/PACRIM.1999.799607","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799607","url":null,"abstract":"As wireless and ubiquitous computing become increasingly affordable and widespread, traditional client-server models for distributing data fail to offer the flexibility needed in mobile computing environments. Although systems have been proposed to address these concerns, most rely on changes to existing infrastructures. The article describes a server hierarchy that uses currently available resources which alleviate some of the common problems associated with data mining from mobile hosts. Although designed for retrieving stored status monitoring information and topology of cable television amplifier networks, the proposed system is general enough to be used for disseminating arbitrary data across a computer network. Client mobility and fault tolerance, if required, are handled through the use of object serialization and intermediate agents.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115733795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An efficient frequency offset estimator for timing and frequency synchronization in OFDM systems
Pub Date: 1999-08-22 | DOI: 10.1109/PACRIM.1999.799604
Y. Kim, Y. Hahm, Hye Jung Jung, I. Song
By modifying a conventional method that requires two training symbols, we propose a timing and frequency synchronization algorithm for OFDM systems that requires only one training symbol. While the frame/symbol timing is obtained using the conventional method, the carrier frequency offset is efficiently estimated by the proposed method. Key features of the proposed method are presented in terms of the missing probability and the estimation-error variance of the carrier frequency offset estimator in AWGN and frequency-selective fading channels. It is shown that the proposed method not only reduces the number of training symbols but also performs better than the conventional method without increased complexity.
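Single-training-symbol estimators of this kind typically rest on a symbol whose two halves are identical, so the carrier frequency offset appears as the phase of a half-symbol correlation. The sketch below shows that textbook estimator for context; it is not necessarily the paper's exact modification.

```python
# Textbook half-symbol correlator, shown for context only.
import numpy as np

def estimate_cfo(rx, N, fs):
    """CFO in Hz from a length-N training symbol with two identical halves."""
    half = N // 2
    corr = np.vdot(rx[:half], rx[half:N])      # sum of conj(first) * second
    return np.angle(corr) * fs / (np.pi * N)

# A frequency offset df rotates the second half by exp(j*2*pi*df*(N/2)/fs)
# relative to the first, so angle(corr) = pi*df*N/fs, giving the line above.
```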
{"title":"An efficient frequency offset estimator for timing and frequency synchronization in OFDM systems","authors":"Y. Kim, Y. Hahm, Hye Jung Jung, I. Song","doi":"10.1109/PACRIM.1999.799604","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799604","url":null,"abstract":"In this paper, by modifying a conventional method which requires two training symbols, we propose a timing and frequency synchronization algorithm for OFDM systems which requires one training symbol. While the frame/symbol timing is obtained by using the conventional method, the carrier frequency offset is efficiently estimated by the proposed method. Key features of the proposed method are presented in terms of missing probability and estimation error variance of the carrier frequency offset estimator in AWGN and frequency selective fading channels. It is shown that the proposed method not only reduces the number of training symbols but also possesses better performance than the conventional method without increased complexity.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130062767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

High quality image restoration by adaptively transformed sampling function
Pub Date: 1999-08-22 | DOI: 10.1109/PACRIM.1999.799512
M. Ohira, K. Mori, K. Wada, K. Toraichi
We propose a method for enlarging images with high quality. The method is based on gray-level interpolation; instead of employing the general sampling function, it uses a two-dimensional sampling function generated from a function more appropriate for gray-level interpolation. One of the largest problems in image enlargement is the exaggeration of jagged edges. To deal with this problem, we first search for edges and detect their direction; the two-dimensional sampling function is then transformed along the direction of the detected edges. To test its effectiveness, the proposed method is implemented and applied to actual image data.
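The edge-direction step can be pictured as estimating gradient orientation per pixel and steering the interpolation kernel along it. A sketch of only the direction estimate follows, using Sobel gradients as an assumed detector; the kernel transformation itself is not reproduced.

```python
# Only the direction estimate is sketched; Sobel gradients are an assumed
# detector, not necessarily the paper's.
import numpy as np
from scipy.ndimage import sobel

def edge_directions(img):
    """Per-pixel edge strength and gradient (edge-normal) orientation."""
    gx = sobel(img, axis=1)          # horizontal gradient
    gy = sobel(img, axis=0)          # vertical gradient
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```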
{"title":"High quality image restoration by adaptively transformed sampling function","authors":"M. Ohira, K. Mori, K. Wada, K. Traichi","doi":"10.1109/PACRIM.1999.799512","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799512","url":null,"abstract":"We propose a method for enlarging images with high quality. The method is based on gray level interpolation, and instead of employing the general sampling function, it uses a two-dimensional sampling function, which is generated from a more appropriate function for gray level interpolation. One of the largest problems we face upon image enlargement is the exaggeration of the jagged edges. To deal with this problem, we first search for the edges and detect their direction. The two-dimensional sampling function is then transformed along the direction of these detected edges. To test for its effectiveness, the proposed method is implemented and is applied to actual image data.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125489778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}