
1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368): Latest Publications

Image expansion using segmentation-based method
A.K. Murad Agha, R. Ward, S. Zahir
Precise image expansion techniques are required for several applications in the image processing field, including reconnaissance photography, cartography, medical imaging, and satellite imagery. Several methods have been employed for this purpose: (1) pixel replication; (2) area sizing; and (3) interpolation and spline methods. All of these methods introduce distortion and noticeable degradation in image quality, especially on and near edges. We introduce a segmentation-based method that produces significantly improved expanded images and maintains high-quality edges. This method segments the image into nonstationary regions and homogeneous regions and then expands them separately via different procedures. Nonstationary regions are expanded using an elaborate look-ahead-and-back procedure; homogeneous regions are expanded using an expanded linear prediction approach. The experimental simulation results show that the expanded images are aesthetically and objectively better than those of other methods.
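To make the segment-then-expand idea concrete, here is a minimal Python sketch: local variance stands in for the paper's segmentation, pixel replication stands in for its look-ahead-and-back step, and separable linear interpolation stands in for its linear-prediction expansion, so only the overall structure (not the authors' actual procedures) is illustrated. The block size and `var_thresh` are arbitrary choices.

```python
import numpy as np

def bilinear_2x(tile):
    """Separable linear interpolation that doubles a tile's size."""
    h, w = tile.shape
    ys, xs = np.linspace(0, h - 1, 2 * h), np.linspace(0, w - 1, 2 * w)
    rows = np.array([np.interp(xs, np.arange(w), r) for r in tile])
    return np.array([np.interp(ys, np.arange(h), c) for c in rows.T]).T

def expand_2x(img, var_thresh=100.0, block=8):
    """Toy 2x expansion: segment blocks by variance, expand each class differently."""
    h, w = img.shape
    out = np.zeros((2 * h, 2 * w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block].astype(float)
            if tile.var() < var_thresh:
                up = bilinear_2x(tile)                       # homogeneous region: smooth interpolation
            else:
                up = np.repeat(np.repeat(tile, 2, 0), 2, 1)  # nonstationary region: replication keeps edges sharp
            out[2 * y:2 * y + up.shape[0], 2 * x:2 * x + up.shape[1]] = up
    return out
```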
{"title":"Image expansion using segmentation-based method","authors":"A.K. Murad Agha, R. Ward, S. Zahir","doi":"10.1109/PACRIM.1999.799486","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799486","url":null,"abstract":"Precise image expansion techniques are required for several applications in image processing field. These include reconnaissance photography, cartography, medical images, and satellite imagery. Several methods have been employed for this purpose: (1) pixel replication; (2) area sizing; and (3) interpolation and spline methods. All these methods generate distortion and noticeable degradation in the quality of the image especially on and near edges. We introduce a segmentation-based method that produces significantly improved expanded images and maintains high quality edges. This method segments the image into nonstationary regions and homogenous regions and then expands them separately and via different procedures. Nonstationary regions are expanded using an elaborate look-ahead-and-back procedure. Homogenous regions are expanded using an expanded linear prediction approach. The experimental simulation results show that the expanded images are aesthetically and objectively better than those of other methods.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124183536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 3
Reverse engineering tools in the reporting and analysis of the Year 2000 date problem
J. Dahmer, R. Foley
With the Year 2000 approaching, there is a massive amount of code review and modification going on within industry. One aspect of this millennium problem is computer systems that have used a two digit rendering of the year when interpreting dates. Since the Year 2000 is a fixed deadline that cannot be moved, time is of the essence for businesses in determining if their software has any date logic within them that is incorrect. This means that any utilities that can be implemented to reduce the amount of time software maintainers spend on comprehension and modification of programs would be of benefit. Since reverse engineering techniques can be used to identify components and interrelationships within software, as well as build up high level abstractions from low level code details, it is possible to determine through practical example whether reverse engineering tools and techniques could serve a role in identifying computer software date code problems for the Year 2000.
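As a rough illustration of the kind of automated scan such tools support, the sketch below walks a source tree and flags lines matching a few two-digit-year heuristics. The patterns and file extensions are invented for illustration and are not taken from the paper's tooling.

```python
import re
from pathlib import Path

# Crude heuristics for two-digit-year handling: string concatenation with "19",
# truncating modulus arithmetic, and two-digit year variable names.
PATTERNS = [
    re.compile(r'''["']19["']\s*\+'''),       # building a year as "19" + yy
    re.compile(r'\byear\s*%\s*100\b', re.I),  # truncating a year to two digits
    re.compile(r'\byy\b', re.I),              # two-digit year variable names
]

def scan_for_y2k(root, exts=(".c", ".cbl", ".cpp", ".pl")):
    """Walk a source tree and report lines matching any two-digit-year heuristic."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in exts or not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in PATTERNS):
                hits.append((str(path), lineno, line.strip()))
    return hits
```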
{"title":"Reverse engineering tools in the reporting and analysis of the Year 2000 date problem","authors":"J. Dahmer, R. Foley","doi":"10.1109/PACRIM.1999.799554","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799554","url":null,"abstract":"With the Year 2000 approaching, there is a massive amount of code review and modification going on within industry. One aspect of this millennium problem is computer systems that have used a two digit rendering of the year when interpreting dates. Since the Year 2000 is a fixed deadline that cannot be moved, time is of the essence for businesses in determining if their software has any date logic within them that is incorrect. This means that any utilities that can be implemented to reduce the amount of time software maintainers spend on comprehension and modification of programs would be of benefit. Since reverse engineering techniques can be used to identify components and interrelationships within software, as well as build up high level abstractions from low level code details, it is possible to determine through practical example whether reverse engineering tools and techniques could serve a role in identifying computer software date code problems for the Year 2000.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124532675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Multi-channel multi-point distribution service system transceiver implementation
A. Dinh, R. Bolton, R. Mason, R. Palmer
This paper presents the hardware implementation of a high-speed transceiver to be used in a multi-channel multi-point distribution system (MMDS). Based on the standards specifications, various building blocks are implemented using FPGA prototypes. Data integrity protection, namely the forward error correction scheme in the transceiver, has been found to be expensive to implement. This scheme includes a Reed-Solomon codec and byte interleaving to correct both random and burst errors caused by the channel. Results show that a data rate of 80 Mbit/s can be achieved using the FPGA prototypes. Higher data rates are expected when final ASICs are developed.
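The byte-interleaving half of that FEC scheme is easy to sketch. The block interleaver below assumes a Reed-Solomon codeword length of 204 bytes (a common choice, e.g. RS(204,188); the paper does not state its parameters) and spreads a channel burst across `depth` codewords so that each codeword sees only a few errors.

```python
import numpy as np

def interleave(data, depth, codeword_len=204):
    """Block byte interleaver: write `depth` codewords as rows, read out by
    columns so a burst of channel errors lands in different codewords."""
    block = np.frombuffer(bytes(data), dtype=np.uint8).reshape(depth, codeword_len)
    return block.T.reshape(-1).tobytes()

def deinterleave(data, depth, codeword_len=204):
    """Inverse operation at the receiver, before Reed-Solomon decoding."""
    block = np.frombuffer(bytes(data), dtype=np.uint8).reshape(codeword_len, depth)
    return block.T.reshape(-1).tobytes()
```

Round-tripping `deinterleave(interleave(payload, 4), 4)` on a 4 x 204-byte payload returns the original bytes.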
{"title":"Multi-channel multi-point distribution service system transceiver implementation","authors":"A. Dinh, R. Bolton, R. Mason, R. Palmer","doi":"10.1109/PACRIM.1999.799522","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799522","url":null,"abstract":"This paper presents the hardware implementation of a high-speed transceiver to be used in a multi-channel multi-point distribution system (MMDS). Based on standards specifications, various building blocks are implemented using FPGA prototypes. It has been found that data integrity protection is expensive to implement, namely the forward error correction scheme in the transceiver. This includes Reed-Solomon codec and byte interleaving to correct both random and burst errors causing by the channel. Results show a data rate of 80 Mbit/s can be achieved using FPGA prototypes. Higher data rates are expected when final ASICs are developed.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125598724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
SBDD variable reordering based on probabilistic and evolutionary algorithms
Mitchell A. Thornton, J. P. Williams, Rolf Drechsler, Nicole Drechsler, D. M. Wessels
Modern CAD tools must represent large Boolean functions compactly in order to obtain reasonable runtimes for synthesis and verification. The shared binary decision diagram (SBDD) with negative edge attributes can represent many functions in a compact form if a proper variable ordering is used. In this work we describe a technique for reordering the variables in an SBDD to reduce the size of the data structure. A common heuristic for the variable ordering problem is to group together variables that have similar characteristics. We use this heuristic to formulate a technique for the reordering problem using probability-based metrics. Our results indicate that this technique outperforms sifting with comparable runtimes. Furthermore, the method is robust in that the final results are independent of the initial structure of the SBDD.
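The "group variables with similar characteristics" heuristic can be illustrated with a toy probability metric: order the variables of a small Boolean function by the probability that the function is 1 when that variable is 1. This sketches only the flavour of probability-based ordering metrics, not the paper's actual metric or its evolutionary search.

```python
from itertools import product

def cofactor_probabilities(f, n):
    """For each input variable i, compute P(f = 1 | x_i = 1) by exhaustive
    enumeration of the truth table (fine for small n)."""
    probs = []
    for i in range(n):
        ones = [f(bits) for bits in product((0, 1), repeat=n) if bits[i] == 1]
        probs.append(sum(ones) / len(ones))
    return probs

def probability_based_order(f, n):
    """Order variables so that ones with similar cofactor probabilities are adjacent."""
    probs = cofactor_probabilities(f, n)
    return sorted(range(n), key=lambda i: probs[i])

# Example: f = (x0 AND x1) OR x2, with x3 unused.
# The interacting variables x0 and x1 end up next to each other: [3, 0, 1, 2].
f = lambda x: int((x[0] and x[1]) or x[2])
print(probability_based_order(f, 4))
```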
{"title":"SBDD variable reordering based on probabilistic and evolutionary algorithms","authors":"Mitchell A. Thornton, J. P. Williams, Rolf Drechsler, Nicole Drechsler, D. M. Wessels","doi":"10.1109/PACRIM.1999.799556","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799556","url":null,"abstract":"Modern CAD tools must represent large Boolean functions compactly in order to obtain reasonable runtimes for synthesis and verification. The shared binary decision diagram (SBDD) with negative edge attributes can represent many functions in a compact form if a proper variable ordering is used. In this work we describe a technique for reordering the variables in an SBDD to reduce the size of the data structure. A common heuristic for the variable ordering problem is to group variables together that have similar characteristics. We use this heuristic to formulate a technique for the reordering problem using probability based metrics. Our results indicate that this technique outperforms sifting with comparable runtimes. Furthermore, the method is robust in that the final results independent of the initial structure of the SBDD.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"368 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121735214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 15
Practical compactly supported sampling functions of degree 2
K. Katagishi, K. Toraichi, S. Hattori, Seng Luan Lee, K. Nakamura
In multimedia communications fields such as audio and image processing, sampling functions are used to reconstruct an analog signal from the set of sampled values obtained by digitizing it. However, conventional sampling functions are not well suited to reconstructing television signals with high-speed processing, because they are not compactly supported. This paper proposes practical new sampling functions that are compactly supported.
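For illustration, the sketch below reconstructs a signal with a standard compactly supported quadratic interpolation kernel (support |t| < 1.5). The paper's proposed degree-2 functions differ, but the reconstruction sum x(t) = sum_n x[n] psi(t - n) and the fact that only a handful of samples contribute to each output point are the same.

```python
import numpy as np

def quad_kernel(t):
    """A standard compactly supported quadratic interpolation kernel
    (support |t| < 1.5); stands in for the paper's proposed functions."""
    a = np.abs(t)
    return np.where(a <= 0.5, 1.0 - 2.0 * a**2,
           np.where(a <= 1.5, a**2 - 2.5 * a + 1.5, 0.0))

def reconstruct(samples, t):
    """Evaluate x(t) = sum_n x[n] * psi(t - n) at the query points t."""
    n = np.arange(len(samples))
    return np.sum(samples[None, :] * quad_kernel(t[:, None] - n[None, :]), axis=1)

# Usage: resample a short sequence onto a 4x denser grid.
x = np.array([0.0, 1.0, 0.5, -0.2, 0.0])
t = np.linspace(0, len(x) - 1, 4 * len(x))
y = reconstruct(x, t)
```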
{"title":"Practical compactly supported sampling functions of degree 2","authors":"K. Katagishi, K. Toraichi, S. Hattori, Seng Luan Lee, K. Nakamura","doi":"10.1109/PACRIM.1999.799597","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799597","url":null,"abstract":"Sampling functions are used to reconstruct an analog signal from a set of sampled values obtained by digitizing the signal in the field of multimedia communications such as audio processing, image processing, and so on. However, conventional sampling functions are not useful for reconstructing television signals with high-speed processing, because they are not compactly supported. This paper proposes practical new sampling functions which are compactly supported.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131359199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 12
Reconstruction of missing blocks in JPEG picture transmission
M. Ancis, D. Giusto
The paper deals with error concealment in block-coded image transmission over noisy channels. In particular, it proposes a novel algorithm for missing block reconstruction in the frequency domain. In fact, damaged blocks are recovered by interpolating the DCT coefficients of available neighboring blocks. Coefficient interpolation is investigated in four different variants; median and edge-based interpolation are chosen for their capabilities in high-quality reconstruction. Experimental results show a good performance in homogeneous and textured regions, as well as in blocks containing edges.
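A minimal sketch of the median variant, assuming the four neighbouring 8x8 blocks are available and using SciPy's DCT routines: take the coefficient-wise median of the neighbours' DCTs and inverse-transform it as the estimate of the lost block.

```python
import numpy as np
from scipy.fft import dctn, idctn

def conceal_block(above, below, left, right):
    """Recover a lost 8x8 block: coefficient-wise median of the DCTs of the
    four available neighbours, then inverse DCT back to the pixel domain."""
    neighbour_dcts = [dctn(b.astype(float), norm="ortho") for b in (above, below, left, right)]
    estimate = np.median(np.stack(neighbour_dcts), axis=0)
    return idctn(estimate, norm="ortho")
```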
{"title":"Reconstruction of missing blocks in JPEG picture transmission","authors":"M. Ancis, D. Giusto","doi":"10.1109/PACRIM.1999.799533","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799533","url":null,"abstract":"The paper deals with error concealment in block-coded image transmission over noisy channels. In particular, it proposes a novel algorithm for missing block reconstruction in the frequency domain. In fact, damaged blocks are recovered by interpolating the DCT coefficients of available neighboring blocks. Coefficient interpolation is investigated in four different variants; median and edge-based interpolation are chosen for their capabilities in high-quality reconstruction. Experimental results show a good performance in homogeneous and textured regions, as well as in blocks containing edges.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132028389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 27
Adaptive equalization for partially bandwidth-occupied ADSL transceivers
X.F. Wang, W. Lu, A. Antoniou
A new optimization criterion for the design of a time-domain equalizer in asymmetric digital subscriber line systems is studied. It is shown that the new criterion maximizes the signal-to-noise ratio regardless of the profile of bandwidth occupancy. Based on the fact that the solution of the optimization problem is unique, two adaptation algorithms are developed in the time and frequency domains, respectively. The computational complexities of these two algorithms are comparable to that of the conventional least-mean-square algorithm. Simulation results show that the two algorithms converge at a satisfactory speed but the frequency-domain algorithm offers a slightly faster convergence and a smaller excess mean-square error.
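The paper's new criterion is not reproduced here, but the conventional least-mean-square baseline it is compared against can be sketched in a few lines: an FIR equalizer whose taps are updated from the error against a known training signal. Tap count and step size are arbitrary.

```python
import numpy as np

def lms_equalizer(received, desired, taps=16, mu=1e-3):
    """Plain time-domain LMS adaptation of an FIR equalizer (the conventional
    baseline, not the paper's proposed method)."""
    w = np.zeros(taps)
    err = np.zeros(len(received) - taps)
    for n in range(taps, len(received)):
        x = received[n - taps:n][::-1]   # most recent sample first
        y = w @ x                        # equalizer output
        e = desired[n] - y               # error against the training signal
        w += mu * e * x                  # LMS weight update
        err[n - taps] = e
    return w, err
```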
{"title":"Adaptive equalization for partially bandwidth-occupied ADSL transceivers","authors":"X.F. Wang, W. Lu, A. Antoniou","doi":"10.1109/PACRIM.1999.799602","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799602","url":null,"abstract":"A new optimization criterion for the design of a time-domain equalizer in asymmetric digital subscriber line systems is studied. It is shown that the new criterion maximizes the signal-to-noise ratio regardless of the profile of bandwidth occupancy. Based on the fact that the solution of the optimization problem is unique, two adaptation algorithms are developed in the time and frequency domains, respectively. The computational complexities of these two algorithms are comparable to that of the conventional least-mean-square algorithm. Simulation results show that the two algorithms converge at a satisfactory speed but the frequency-domain algorithm offers a slightly faster convergence and a smaller excess mean-square error.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131310824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
Client mobility and fault tolerance in a distributed network data system
A.P. Schoorl, N. Dimopoulos
As wireless and ubiquitous computing become increasingly affordable and widespread, traditional client-server models for distributing data fail to offer the flexibility needed in mobile computing environments. Although systems have been proposed to address these concerns, most rely on changes to existing infrastructures. The article describes a server hierarchy, built from currently available resources, that alleviates some of the common problems associated with data mining from mobile hosts. Although designed for retrieving stored status-monitoring information and the topology of cable television amplifier networks, the proposed system is general enough to be used for disseminating arbitrary data across a computer network. Client mobility and fault tolerance, if required, are handled through the use of object serialization and intermediate agents.
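As a rough analogue of the serialization mechanism (the paper's system presumably relies on Java object serialization; Python's pickle is used here purely for illustration), a client's session state can be frozen into a byte string, parked with an intermediate agent while the mobile client is unreachable, and reconstructed wherever the client reappears. The `ClientSession` fields are hypothetical.

```python
import pickle
from dataclasses import dataclass, field

@dataclass
class ClientSession:
    """Hypothetical client-side state handed to an intermediate agent."""
    query: str
    results_seen: int = 0
    pending: list = field(default_factory=list)

def suspend(session):
    """Serialize the session for transfer to an agent or stable storage."""
    return pickle.dumps(session)

def resume(blob):
    """Reconstruct the session on whatever host the client reappears at."""
    return pickle.loads(blob)

# Usage: the agent keeps the serialized blob while the client is unreachable.
blob = suspend(ClientSession(query="amplifier status", results_seen=42))
restored = resume(blob)
```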
{"title":"Client mobility and fault tolerance in a distributed network data system","authors":"A.P. Schoorl, N. Dimopoulos","doi":"10.1109/PACRIM.1999.799607","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799607","url":null,"abstract":"As wireless and ubiquitous computing become increasingly affordable and widespread, traditional client-server models for distributing data fail to offer the flexibility needed in mobile computing environments. Although systems have been proposed to address these concerns, most rely on changes to existing infrastructures. The article describes a server hierarchy that uses currently available resources which alleviate some of the common problems associated with data mining from mobile hosts. Although designed for retrieving stored status monitoring information and topology of cable television amplifier networks, the proposed system is general enough to be used for disseminating arbitrary data across a computer network. Client mobility and fault tolerance, if required, are handled through the use of object serialization and intermediate agents.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115733795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
An efficient frequency offset estimator for timing and frequency synchronization in OFDM systems
Y. Kim, Y. Hahm, Hye Jung Jung, I. Song
In this paper, by modifying a conventional method which requires two training symbols, we propose a timing and frequency synchronization algorithm for OFDM systems which requires one training symbol. While the frame/symbol timing is obtained by using the conventional method, the carrier frequency offset is efficiently estimated by the proposed method. Key features of the proposed method are presented in terms of missing probability and estimation error variance of the carrier frequency offset estimator in AWGN and frequency selective fading channels. It is shown that the proposed method not only reduces the number of training symbols but also possesses better performance than the conventional method without increased complexity.
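A standard single-training-symbol estimator of this kind correlates the two identical halves of the training symbol and reads the fractional carrier frequency offset from the phase of the correlation. The sketch below shows that idea; it is not necessarily the exact estimator proposed in the paper.

```python
import numpy as np

def estimate_cfo(rx, start, nfft):
    """Estimate the fractional carrier frequency offset (in subcarrier spacings)
    from one training symbol whose two halves are identical."""
    half = nfft // 2
    first = rx[start:start + half]
    second = rx[start + half:start + nfft]
    p = np.sum(np.conj(first) * second)   # half-symbol correlation
    return np.angle(p) / np.pi            # valid for offsets within +/-1 subcarrier spacing

# Usage with a synthetic signal carrying a known offset of 0.3 subcarrier spacings.
nfft, eps = 64, 0.3
halfsym = np.exp(2j * np.pi * np.random.rand(nfft // 2))
tx = np.concatenate([halfsym, halfsym])
n = np.arange(nfft)
rx = tx * np.exp(2j * np.pi * eps * n / nfft)
print(estimate_cfo(rx, 0, nfft))          # approximately 0.3
```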
{"title":"An efficient frequency offset estimator for timing and frequency synchronization in OFDM systems","authors":"Y. Kim, Y. Hahm, Hye Jung Jung, I. Song","doi":"10.1109/PACRIM.1999.799604","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799604","url":null,"abstract":"In this paper, by modifying a conventional method which requires two training symbols, we propose a timing and frequency synchronization algorithm for OFDM systems which requires one training symbol. While the frame/symbol timing is obtained by using the conventional method, the carrier frequency offset is efficiently estimated by the proposed method. Key features of the proposed method are presented in terms of missing probability and estimation error variance of the carrier frequency offset estimator in AWGN and frequency selective fading channels. It is shown that the proposed method not only reduces the number of training symbols but also possesses better performance than the conventional method without increased complexity.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130062767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 32
High quality image restoration by adaptively transformed sampling function
M. Ohira, K. Mori, K. Wada, K. Traichi
We propose a method for enlarging images with high quality. The method is based on gray-level interpolation; instead of employing the general sampling function, it uses a two-dimensional sampling function generated from a function better suited to gray-level interpolation. One of the biggest problems in image enlargement is the exaggeration of jagged edges. To deal with this problem, we first search for edges and detect their direction; the two-dimensional sampling function is then transformed along the direction of the detected edges. To test its effectiveness, the proposed method is implemented and applied to actual image data.
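A toy version of direction-adaptive enlargement: copy the original pixels, then fill each new diagonal pixel by interpolating along whichever diagonal of its 2x2 neighbourhood varies least, i.e. along the local edge. This only illustrates the idea of adapting the interpolation to edge direction; the paper deforms an actual two-dimensional sampling function rather than switching between diagonals.

```python
import numpy as np

def edge_adaptive_2x(img):
    """Toy direction-adaptive 2x enlargement of a grayscale image."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1))
    out[::2, ::2] = img                                   # original samples kept
    # New diagonal pixels: interpolate along the flatter diagonal (the edge direction).
    a, b = img[:-1, :-1], img[:-1, 1:]
    c, d = img[1:, :-1], img[1:, 1:]
    along_main = np.abs(a - d) <= np.abs(b - c)
    out[1::2, 1::2] = np.where(along_main, (a + d) / 2, (b + c) / 2)
    # Remaining in-between pixels: plain horizontal / vertical averages.
    out[::2, 1::2] = (img[:, :-1] + img[:, 1:]) / 2
    out[1::2, ::2] = (img[:-1, :] + img[1:, :]) / 2
    return out
```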
{"title":"High quality image restoration by adaptively transformed sampling function","authors":"M. Ohira, K. Mori, K. Wada, K. Traichi","doi":"10.1109/PACRIM.1999.799512","DOIUrl":"https://doi.org/10.1109/PACRIM.1999.799512","url":null,"abstract":"We propose a method for enlarging images with high quality. The method is based on gray level interpolation, and instead of employing the general sampling function, it uses a two-dimensional sampling function, which is generated from a more appropriate function for gray level interpolation. One of the largest problems we face upon image enlargement is the exaggeration of the jagged edges. To deal with this problem, we first search for the edges and detect their direction. The two-dimensional sampling function is then transformed along the direction of these detected edges. To test for its effectiveness, the proposed method is implemented and is applied to actual image data.","PeriodicalId":176763,"journal":{"name":"1999 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 1999). Conference Proceedings (Cat. No.99CH36368)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125489778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 8