Pub Date: 2005-12-21; DOI: 10.1109/ISSPIT.2005.1577162
Qu Xian-jie, Wang Zhao-qi, Xia Shi-hong, Liao Jin-tao
Recovery of 3D body pose is a fundamental problem for human motion analysis in many applications such as motion capture, vision interfaces, visual surveillance, and gesture recognition. In this paper, we present a new image-based approach to infer 3D human structure parameters from uncalibrated video. The estimation is example-based. First, we acquire a special motion database through an off-line motion capture process. Second, given an uncalibrated motion video, we extract the viewpoint, and a silhouette database associated with 3D poses is built by projecting each entry of the 3D motion database onto the 2D plane. Next, given the image silhouettes, the unknown structure parameters are inferred by performing a similarity search in the silhouette database. We focus on retrieving the 3D body pose by matching 2D silhouettes based on shape context. Extensive experiments produced satisfying results. To accelerate the shape-context distance computation, we use PCA (principal component analysis) to reduce the computational complexity. We use trampoline sport, an example of complex human motion, to demonstrate the effectiveness of our approach and compare the results with those obtained with the Hu moments method
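The matching step rests on shape-context descriptors: each contour point gets a log-polar histogram of the positions of all other contour points. The sketch below is a generic illustration of that idea, not the authors' implementation; the bin counts and radial limits are arbitrary choices.

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """Log-polar shape-context histograms for a set of 2D contour points.

    Returns an (n, n_r * n_theta) array, one normalised histogram per point.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]        # diff[i, j] = pts[j] - pts[i]
    r = np.linalg.norm(diff, axis=2)                # pairwise distances
    theta = np.arctan2(diff[..., 1], diff[..., 0])  # pairwise angles in [-pi, pi]
    mean_r = r[r > 0].mean()                        # scale normalisation
    # log-spaced radial bin edges from mean_r/8 to 2*mean_r (arbitrary span)
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_r
    hists = np.zeros((n, n_r * n_theta))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rb = np.searchsorted(r_edges, r[i, j]) - 1
            if rb < 0 or rb >= n_r:                 # outside the radial range
                continue
            tb = int((theta[i, j] + np.pi) / (2 * np.pi) * n_theta) % n_theta
            hists[i, rb * n_theta + tb] += 1
    sums = hists.sum(axis=1, keepdims=True)
    sums[sums == 0] = 1
    return hists / sums                             # each histogram sums to 1
```

Matching two silhouettes then reduces to comparing the two descriptor sets under, e.g., a chi-square histogram distance; per the abstract, the descriptors can first be projected with PCA to cut the cost of those distance computations.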
Title: "Estimating articulated human pose from video using shape context". Published in: Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005.
Pub Date: 2005-12-21; DOI: 10.1109/ISSPIT.2005.1577139
J. D. Chimeh, H. Bakhshi, E. Karami
In this paper, a semi-blind beamforming algorithm is proposed for Orthogonal Frequency Division Multiplexing (OFDM) signals to combat multipath fading; it performs semi-blind beamforming of each sub-band of an OFDM system using a Decision Directed Algorithm (DDA). The proposed algorithm is simulated for a time-varying channel, with the Clark model describing block-to-block variation of the channel coefficients. It is shown that without the semi-blind operation an error floor occurs in the performance of the sub-band beamformer, but in the blind mode of operation, by exploiting the information content of the decisions, the sub-band beamformer provides an efficient bit error rate versus Eb/N0 even for different block lengths and bandwidths
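A decision-directed weight update of the kind the abstract describes can be sketched as one complex LMS step per sub-carrier snapshot: with a pilot the error uses the known symbol (training); without one, the hard decision on the current output serves as the reference. The QPSK slicer, step size, and function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def dd_lms_step(w, x, pilot=None, mu=0.05):
    """One (decision-directed) LMS update of beamformer weights w for one
    sub-carrier antenna snapshot x; y = w^H x is the beamformer output."""
    y = np.vdot(w, x)                                   # vdot conjugates w
    if pilot is not None:
        ref = pilot                                     # training mode
    else:
        # decision-directed mode: hard QPSK decision on the current output
        ref = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)
    e = ref - y
    return w + mu * x * np.conj(e)                      # complex LMS step
```

With unit-modulus QPSK symbols and a fixed noiseless channel, repeated training-mode steps drive the combined response w^H h to 1 geometrically, which is what the toy check below exercises.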
Title: "Sub-band beamforming of OFDM signals in time varying multi-path fading channel". Published in: Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005.
Pub Date: 2005-12-21; DOI: 10.1109/ISSPIT.2005.1577133
Liyu Liu, M. Amin
The performance of GPS delay lock loop (DLL) can be greatly compromised by multipath. This paper considers the effects of multipath on the GPS receiver. It evaluates the DLL performance in terms of the tracking error of the noncoherent early-minus-late power discriminator. We discuss the effect of precorrelation filter bandwidth on the autocorrelation function of the GPS spreading code and subsequently, on the DLL signal tracking. The tracking error, which is influenced by the precorrelation filter bandwidth, is analyzed
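The noncoherent early-minus-late power discriminator, and the tracking bias that multipath introduces into it, can be illustrated with the ideal triangular code autocorrelation. This is a minimal sketch: the two-ray multipath parameters are arbitrary, and no precorrelation filtering is modelled (the paper's point is precisely that a finite precorrelation bandwidth rounds this triangle and changes the tracking error).

```python
import numpy as np

def ca_autocorr(tau):
    """Ideal (infinite-bandwidth) spreading-code autocorrelation:
    a unit triangle of half-width one chip."""
    return np.maximum(0.0, 1.0 - np.abs(tau))

def eml_power(tau_err, d=1.0):
    """Noncoherent early-minus-late power discriminator output for a
    code-phase error tau_err (chips) and correlator spacing d (chips),
    multipath-free case."""
    early = ca_autocorr(tau_err - d / 2)
    late = ca_autocorr(tau_err + d / 2)
    return early**2 - late**2

def eml_power_multipath(tau_err, a=0.5, delay=0.25, d=1.0):
    """Same discriminator with one in-phase multipath ray of relative
    amplitude a and delay (chips): the zero crossing moves, i.e. the
    DLL acquires a tracking bias."""
    R = lambda t: ca_autocorr(t) + a * ca_autocorr(t - delay)
    early = R(tau_err - d / 2)
    late = R(tau_err + d / 2)
    return early**2 - late**2
```

In the clean case the discriminator is zero at zero code-phase error and has the right restoring sign on either side; with the multipath ray it is no longer zero at the true delay, which is the tracking error the paper analyzes.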
Title: "Multipath and precorrelation filtering effect on GPS noncoherent early-minus-late power discriminators". Published in: Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005.
Pub Date: 2005-12-21; DOI: 10.1109/ISSPIT.2005.1577196
F. Beritelli, S. Casale, S. Serrano
The paper presents an adaptive system for speech signal processing in the presence of loud background noise. The validity of the approach is confirmed by implementing a classification system for voiced/unvoiced (V/UV) speech frames. Genetic algorithms were used to select the parameters that offer the best V/UV classification in the presence of 4 different types of background noise and 5 different SNRs. 20 neural-network-based classification systems were then implemented, chosen dynamically frame by frame according to the output of a background noise recognition system and an SNR estimation system. The system was implemented and tested using the TIMIT speech corpus and its phonetic classification. The results were compared with a non-adaptive classification system and with the V/UV detectors adopted by three important standards: LPC10, ITU-T G.723.1 and ETSI AMR. In all cases the adaptive V/UV classifier clearly outperformed the others, confirming the validity of the adaptive approach
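For orientation, a classical non-adaptive V/UV decision, the kind of baseline such systems are measured against, uses short-time energy and zero-crossing rate: voiced frames are high-energy and low-ZCR. This toy sketch is not the paper's genetic-algorithm/neural-network system, and the thresholds are arbitrary.

```python
import numpy as np

def vuv_decision(frame, energy_thr=0.01, zcr_thr=0.25):
    """Toy voiced/unvoiced decision for one speech frame: voiced iff the
    short-time energy is high and the zero-crossing rate is low."""
    frame = np.asarray(frame, dtype=float)
    energy = np.mean(frame**2)                          # short-time energy
    # fraction of sample pairs whose sign flips (zero-crossing rate)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return bool(energy > energy_thr and zcr < zcr_thr)
```

A 150 Hz sinusoid frame (strongly periodic, few crossings) classifies as voiced, while a low-level noise frame does not; in loud background noise this simple rule degrades quickly, which motivates the paper's noise-adaptive classifier bank.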
Title: "Adaptive robust speech processing based on acoustic noise estimation and classification". Published in: Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005.
Pub Date: 2005-12-21; DOI: 10.1109/ISSPIT.2005.1577118
B. Sayadi, S. Ataman, I. Fijalkow
In this paper, we propose to combine the gains of downlink power control and joint multicode detection for an HSDPA link. We propose an algorithm that controls both the transmitted code powers and the joint multicode receiver filter coefficients. The proposed algorithm is iterative: at each iteration, the receiver filter coefficients of the multicode user are first updated to reduce inter-code interference, and then the transmitted code powers are updated. As a result, each spreading code of the multicode scheme creates the minimum possible interference to the others while satisfying the quality-of-service requirement. The main goals of this algorithm are to decrease inter-code interference and to increase system capacity. Simulation is used to show the convergence of the proposed algorithm to a fixed-point power vector at which the multicode user satisfies its signal-to-interference ratio (SIR) target on each code
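The alternating structure described above (recompute the receiver filters, then rescale the code powers toward the SIR target) can be sketched with linear MMSE filters and a standard interference-function power step. This is a generic model with made-up signatures, not the paper's HSDPA receiver; all matrix sizes and targets below are illustrative.

```python
import numpy as np

def mmse_sirs(H, p, sigma2):
    """SIR achieved on each spreading code under linear MMSE filtering.
    H[:, k] is the effective signature of code k; p are the code powers."""
    n, K = H.shape
    R = H @ np.diag(p) @ H.conj().T + sigma2 * np.eye(n)  # received covariance
    W = np.linalg.solve(R, H)                             # MMSE filters (up to scale)
    sirs = np.empty(K)
    for k in range(K):
        w = W[:, k]
        signal = p[k] * abs(np.vdot(w, H[:, k])) ** 2
        interf = sum(p[j] * abs(np.vdot(w, H[:, j])) ** 2
                     for j in range(K) if j != k)
        noise = sigma2 * np.linalg.norm(w) ** 2
        sirs[k] = signal / (interf + noise)
    return sirs

def iterate_powers(H, sigma2, gamma, n_iter=200):
    """Alternate the two updates: refresh the MMSE filters, then scale each
    code power toward the common SIR target gamma."""
    p = np.ones(H.shape[1])
    for _ in range(n_iter):
        p = p * gamma / mmse_sirs(H, p, sigma2)
    return p
```

When the target is feasible, the power vector converges to the fixed point at which every code meets gamma exactly, mirroring the convergence result reported in the abstract.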
Title: "Combined downlink power control and joint multicode receivers for downlink transmissions in high speed UMTS". Published in: Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005.
Pub Date: 2005-12-21; DOI: 10.1109/ISSPIT.2005.1577062
Jared C. Smolens, Jangwoo Kim, J. Hoe, B. Falsafi
Superscalar out-of-order microarchitectures can be modified to support redundant execution of a program as two concurrent threads for soft-error detection. However, the extra workload from redundant execution incurs a performance penalty due to increased contention for resources throughout the datapath. We identify four key parameters that affect the performance of these designs, namely 1) issue and functional unit bandwidth, 2) issue queue and reorder buffer capacity, 3) decode and retirement bandwidth, and 4) coupling between the redundant threads' instantaneous resource requirements. We then survey existing work on concurrent error detecting superscalar microarchitectures and evaluate these proposals with respect to the four factors
Title: "Understanding the performance of concurrent error detecting superscalar microarchitectures". Published in: Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005.
Pub Date: 2005-12-21; DOI: 10.1109/ISSPIT.2005.1577198
Qing Xu, Ruijuan Hu, Lianping Xing, Yuan Xu
Adaptive sampling is an effective tool for lowering noise, one of the main problems of Monte Carlo global illumination algorithms such as the well-known baseline, Monte Carlo path tracing. The classic information measure, Shannon entropy, has been applied successfully to adaptive sampling in Monte Carlo path tracing. In this paper we investigate the generalized Renyi entropy to establish refinement criteria that guide both pixel supersampling and pixel subdivision adaptively. Implementation results show that adaptive sampling based on Renyi entropy consistently outperforms its Shannon-entropy counterpart
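The per-pixel refinement criterion can be sketched as a normalised Rényi entropy of the luminance samples already gathered in a pixel: a wide spread of sample values means high entropy and calls for more samples. The order alpha, bin count, and threshold below are illustrative choices, not the paper's settings.

```python
import numpy as np

def renyi_entropy(values, alpha=2.0, n_bins=8, eps=1e-12):
    """Rényi entropy of order alpha of the sample-value distribution in one
    pixel, normalised to [0, 1] by the maximum entropy log(n_bins)."""
    hist, _ = np.histogram(values, bins=n_bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > eps]                       # drop empty bins
    if alpha == 1.0:
        h = -np.sum(p * np.log(p))       # Shannon limit of the Rényi family
    else:
        h = np.log(np.sum(p**alpha)) / (1.0 - alpha)
    return h / np.log(n_bins)

def needs_refinement(samples, threshold=0.6, alpha=2.0):
    """Refine (supersample / subdivide) a pixel whose sample distribution
    is still high-entropy."""
    return renyi_entropy(samples, alpha) > threshold
```

Uniformly spread samples give entropy near 1 (keep refining), while identical samples give 0 (the pixel has converged); the renderer spends its extra rays where this measure stays high.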
Title: "Adaptive sampling with Renyi entropy in Monte Carlo path tracing". Published in: Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005.
Pub Date: 2005-12-21; DOI: 10.1109/ISSPIT.2005.1577158
P. Belsis, S. Gritzalis, S. Katsikas
Coalitions between autonomous domains are often formed in real-life scenarios to grant access permissions to shared objects on the grounds of bilateral resource-sharing agreements. The dynamic nature of coalitions poses new challenges for security management and the joint administration of resources; we therefore identify a need to reconcile and extend single-domain security models so that their role definition schemes incorporate location-, time- and context-based parameters. In this paper, we introduce a robust and scalable solution that enables coalition formation in a multi-domain, policy-ruled environment
Title: "A scalable security architecture enabling coalition formation between autonomous domains". Published in: Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005.
Pub Date: 2005-12-21; DOI: 10.1109/ISSPIT.2005.1577060
D. G. Pérez, H. Berry, O. Temam
While the recent surge of research articles on sampling started with rather large sample sizes, work later shifted to very small intervals, and is now converging to intermediate, and even varying, sizes. With 100M-instruction samples, warm-up is not an issue, at least with current cache sizes. With significantly smaller samples, however, warm-up becomes critical, especially when the sampling target accuracy is on the order of a few percent. Yet in most sampling research, warm-up has largely been treated as a separate issue. In this article, we advocate an integrated approach to (simulator-based) warm-up and sampling. Instead of separating warm-up from sampling, we take exactly the opposite approach: we provide a common instruction budget for warm-up and sampling, and we attempt to spend it as wisely as possible on either one. This budgeted, integrated approach to warm-up and sampling achieves an average CPI error of 1.68% on the 26 SPEC benchmarks with an average sample size of 288 million instructions, and at the same time it relieves the user from delicate decisions such as setting the sampling or warm-up sizes, thanks to the integrated warm-up+sampling and region partitioning approaches
Title: "Budgeted region sampling (BeeRS): do not separate sampling from warm-up, and then spend wisely your simulation budget". Published in: Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005.
Pub Date: 2005-12-21; DOI: 10.1109/ISSPIT.2005.1577084
A. Yildirim, M. Efe, A.K. Ozdemir
In this paper we analyze the peak picking losses induced by conventional radar signal processors, which assume a point target model for detection and tracking. As demonstrated through simulations, the performance degradation under the point target assumption can be significant for high-resolution radars, where targets extend across several detection cells. Interpolation of nearby data around the detected peak provides only a slight improvement. This paper presents a maximum likelihood estimator (MLE) to reduce the peak picking losses. By comparing the variance of the estimator with the Cramer-Rao lower bound derived in this paper, it is shown that the maximum likelihood estimator significantly reduces peak picking losses
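The "interpolation of nearby data around the detected peak" mentioned above is commonly done with a three-point parabolic fit, sketched below as a generic illustration. For an exactly parabolic sampled peak the fit recovers the true location; for realistic (e.g. sinc-shaped) radar responses it leaves a residual bias, which is the slight-improvement-only behaviour that motivates the paper's MLE.

```python
import numpy as np

def parabolic_peak(y, k):
    """Refine a sampled peak at index k by fitting a parabola through the
    three samples (k-1, k, k+1); returns the fractional index offset of
    the fitted vertex relative to k."""
    denom = y[k - 1] - 2.0 * y[k] + y[k + 1]
    if denom == 0:
        return 0.0                      # flat triple: no refinement possible
    return 0.5 * (y[k - 1] - y[k + 1]) / denom
```

Usage: take `k = argmax(y)` from the detector output and report `k + parabolic_peak(y, k)` as the refined peak position.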
Title: "Peak picking losses in radar detectors". Published in: Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005.