Regional land use/cover classification in Malaysia based on conventional digital camera imageries
Pub Date: 2009-03-07 | DOI: 10.1109/AERO.2009.4839458
H. Lim, M. MatJafri, K. Abdullah, C. J. Wong, N. M. Saleh
This paper presents an economical approach to land cover analysis in Malaysia. Land cover classification from remotely sensed data is an important topic in remote sensing applications. We investigated the feasibility of using a conventional digital camera to acquire high-resolution imagery for land use/cover mapping. The objective of this study is to test high-resolution digital camera imagery for land cover mapping using remote sensing techniques. The study areas are the Merbok River estuary in Kedah and Timah Tasoh Lake in Perlis, both located in Peninsular Malaysia. The digital images were taken from a low-altitude light aircraft: a Kodak DC290 camera was used to capture images from an altitude of 8000 feet on board a Cessna 172Q. Using a digital camera as the imaging sensor is cheaper and more economical than using other airborne sensors, and it overcomes the difficulty of obtaining cloud-free scenes from a satellite platform in the equatorial region. The images consist of three visible bands: red, green, and blue. Supervised classification techniques (Maximum Likelihood (ML), Minimum Distance-to-Mean (MDM), and Parallelepiped (P)) were applied to these spectral bands to extract thematic information from the acquired scenes. The accuracy of each classification map was validated against reference data sets consisting of a large number of samples collected per category, and the results showed a high degree of accuracy. This study indicates that a conventional digital camera used as a remote sensing sensor can provide useful information for planning and development over a small coverage area.
{"title":"Regional land use/cover classification in Malaysia Based on conventional digital camera imageries","authors":"H. Lim, M. MatJafri, K. Abdullah, C. J. Wong, N. M. Saleh","doi":"10.1109/AERO.2009.4839458","DOIUrl":"https://doi.org/10.1109/AERO.2009.4839458","url":null,"abstract":"This paper presents an economical analysis of land cover in Malaysia. Land cover classification from remotely sensed data is an important topic in remote sensing applications. We attempted to investigate the feasibility of using a conventional digital camera for acquiring high resolution imagery for land use/cover mapping. The objective of this study is to test the high-resolution digital camera imagery for land cover mapping using remote sensing technique. The study area is the Merbok River estuary, Kedah and Timah Tasoh Lake, Perlis, both located in Peninsular Malaysia. The digital images were taken from a low-attitude light aircraft. A Kodak camera, model DC290, was used to capture images from an elevation of 8000 feet on board Cessna 172Q. The use of a digital camera as a sensor to capture digital images is cheaper and more economical compared to the use of other airborne sensors. This technique overcomes the problem of the difficulty in obtaining cloud-free scenes in the Equatorial region from a satellite platform. The images consisted of the three visible bands-red, green, and blue. Supervised classification technique (Maximum Likelihood, ML, Minimum Distance-to-Mean, MDM, and Parallelepiped, P) was applied to the digital camera spectral bands (red, green and blue) to extract the thematic information from the acquired scenes. The accuracy of each classification map produced was validated using the reference data sets consisting of a large number of samples collected per category. The results produced a high degree of accuracy. This study indicates that the use of a conventional digital camera as a sensor in remote sensing studies can provide useful information for planning and development of a small area of coverage","PeriodicalId":117250,"journal":{"name":"2009 IEEE Aerospace conference","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115124876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Method of combine orbit determination and its application in Space Based Technology
Pub Date: 2009-03-07 | DOI: 10.1109/AERO.2009.4839536
Pan Xiaogang, Zhou Haiyin
Space-based technology has been the trend in space surveillance since the Space-Based Visible (SBV) sensor achieved great success in its experimental mission. With its advantages in coverage and multi-target observation capability, space-based technology has been applied in many fields, such as LEO satellite orbit determination based on GPS and midcourse ballistic missile tracking. The main challenge for space-based observation is platform error, which can badly contaminate the observation data. In space, the SBV platform shakes with high-frequency errors while its axes rotate with low-frequency errors, so the observation data contain not only stochastic errors but also systematic errors at different frequencies. The goal of this paper is to describe these systematic errors and to improve spacecraft orbit determination based on space-based observation data. The combined orbit determination method processes the SBV satellite orbit and the observed space object's orbit simultaneously, so the systematic error of the SBV platform is restrained. Both a batch orbit determination method and the combined orbit determination algorithm are presented. A new strategy for analyzing the orbit determination residuals is designed to produce a near-true observation error model and to compensate for it in the computation. To this end, a semi-parametric nonlinear model is introduced, which can distinctly describe the LEO satellite observation model and dynamic model of a space-based surveillance system, and a nonparametric estimator is proposed to solve it. Finally, a Fourier transform method is applied to the nonparametric part to decompose the different systematic signals in orbit, distinguishing the dynamic error model from the observation error model and the platform error of the space-based satellite.
{"title":"Method of combine orbit determination and its application in Space Based Technology","authors":"Pan Xiaogang, Zhou Haiyin","doi":"10.1109/AERO.2009.4839536","DOIUrl":"https://doi.org/10.1109/AERO.2009.4839536","url":null,"abstract":"Space Based technology is the tendency of new technology of space surveillance since the Space-Based Visible (SBV) gained great success within its experiment mission. With advantages of covering rate and multi-goals exploring capability, the Space Based technology had been applied to many fields such as LEO satellite determination based on GPS, midcourse ballistic missile tracking and so on, the main challenge for space based observation is errors of platform which can badly contaminate the observation data. In space, the SBV platform will be shaken with high frequency error and the axes will be circumvolved with low frequency, so the observation data contains not only stochastic errors data but also system errors data with different frequency. How to describe the system error and improve the orbit determination of spacecraft based on space based observation data is the goal of this paper. Combine orbit determination method is to deal with the SBV satellite orbit and space object satellite orbit synchronously, so the system error of SBV will be restrained. Batch orbit determination method and combine orbit determination algorithm were involved in the paper. By generating a new strategy to analyz the residuals of orbit determination, it is designed to produce the near true environment observation error model, and to compensate in calculation. Thus, semi-parametric non linear model was introduced in this paper, which can distinctly describe the LEO satellite observation model and dynamic model based on Space Based Surveillance System, and a nonparametric estimator was proposed to solve the semi-parametric non linear model, finally the Fourier Transform Method for non-parametric parts was applied to decompose the different system signals in orbit to distinguish the dynamic error model and the observation error model or satellite platform error of space based satellite.","PeriodicalId":117250,"journal":{"name":"2009 IEEE Aerospace conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120962633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallelizing a multi-frame blind deconvolution algorithm on clusters of multicore processors
Pub Date: 2009-03-07 | DOI: 10.1109/AERO.2009.4839545
R. Linderman, S. Spetka, S. Emeny, D. Fitzgerald
The parallelization strategy of the Physically-Constrained Iterative Deconvolution (PCID) algorithm is being altered and optimized to enhance performance on emerging multi-core architectures. This paper reports results from porting PCID to multi-core architectures, including the JAWS supercomputer at the Maui HPC Center (60 TFLOPS of dual-dual Xeon® nodes) and the Cell Cluster at AFRL in Rome, NY (52 TFLOPS of PlayStation 3® nodes with IBM Cell Broadband Engine® multi-cores and 14 dual-quad Xeon head nodes). For 512×512 image sizes, FFT performance exceeding 60 GFLOPS has been observed on dual-quad Xeon nodes. Multi-core architectures programmed with multiple threads delivered significantly better performance for parallelizing the low-level image convolution operations than earlier parallelization across cluster nodes with MPI. Another focus of the PCID multi-core effort was to move from MPI message passing to a publish-subscribe-query approach to information management. The publish, subscribe, and query infrastructure was optimized for large-scale machines such as JAWS, and features a “loose coupling” of publishers to subscribers through intervening brokers. This change makes runs on large HPCs with thousands of intercommunicating cores more flexible and more fault tolerant.
{"title":"Parallelizing a multi-frame blind deconvolution algorithm on clusters of multicore processors","authors":"R. Linderman, S. Spetka, S. Emeny, D. Fitzgerald","doi":"10.1109/AERO.2009.4839545","DOIUrl":"https://doi.org/10.1109/AERO.2009.4839545","url":null,"abstract":"The parallelization strategy of the Physically-Constrained Iterative Deconvolution (PCID) algorithm is being altered and optimized to enhance performance on emerging multi-core architectures. This paper reports results from porting PCID to multi-core architectures including the JAWS supercomputer at the Maui HPC Center (60 TFLOPS of dual-dual Xeon® nodes) and the Cell Cluster at AFRL in Rome, NY (52 TFLOPS of Playstation 3® nodes with IBM Cell Broadband Engine® multi-cores and 14 dual-quad Xeon headnodes). For 512×512 image sizes FFT performance exceeding 60 GFLOPS has been observed on dual-quad Xeon nodes. Multi-core architectures programmed with multiple threads delivered significantly better performance for parallelization of the low level image convolution operations compared to earlier parallelization across cluster nodes with MPI. Another focus of the PCID multi-core effort was to move from MPI message passing to a publish-subscribe-query approach to information management. The publish, subscribe and query infrastructure was optimized for large scale machines, such as JAWS, and features a “loose coupling“ of publishers to subscribers through intervening brokers. This change makes runs on large HPCs with thousands of intercommunicating cores more flexible and more fault tolerant.","PeriodicalId":117250,"journal":{"name":"2009 IEEE Aerospace conference","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127360708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performances of variable step-size adaptive algorithms in non-Gaussian interference environments
Pub Date: 2009-03-07 | DOI: 10.1109/AERO.2009.4839470
Y. R. Zheng, R. Lynch
Two variable step-size normalized least mean square (VSS-NLMS) algorithms, namely the non-parametric VSS-NLMS (NP-VSS-NLMS) and the switched-mode VSS-NLMS (SM-VSS-NLMS), are reformulated in complex signal form for STAP applications. The performance of these two VSS-NLMS algorithms in Gaussian and compound-K clutter is evaluated via a phased-array space-slow-time STAP example. We find that the misadjustment behaviors are inconsistent with the excess MSEs, which are a better measure of STAP performance. Both VSS-NLMS algorithms outperform conventional fixed step-size (FSS) NLMS algorithms, with fast convergence and low steady-state excess MSE. The SM-VSS-NLMS provides a better performance compromise than the NP-VSS-NLMS, with much lower steady-state excess MSE and slightly slower convergence. The performance gain of both VSS algorithms is smaller in heavy-tailed clutter environments than in Gaussian clutter, and their robustness against impulsive interference is better than that of conventional FSS-NLMS.
{"title":"Performances of variable step-size adaptive algorithms in non-Gaussian interference environments","authors":"Y. R. Zheng, R. Lynch","doi":"10.1109/AERO.2009.4839470","DOIUrl":"https://doi.org/10.1109/AERO.2009.4839470","url":null,"abstract":"Two variable step-size normalized least mean square (VSS-NLMS) algorithms, namely the Non-Parametric VSS-NLMS and Switched Mode VSS-NLMS, are reformulated into complex signal form for STAP applications. The performances of these two VSS NLMS algorithms in Gaussian and compound-K clutters are evaluated via a phased array space-slow-time STAP example. We find that the misadjustment behaviors are inconsistent with the excess MSEs which is a better measure of STAP performance. Both VSS-NLMS algorithms outperform conventional fixed step-size (FSS) NLMS algorithms with fast convergence and low steady-state excess MSE. The SM-VSS-NLMS provides a better performance compromise than the NP-VSS-NLMS with much lower steady-state excess MSEs and slightly slower convergence speeds. The performance gain of both VSS algorithms reduces in heavy-tailed clutter environments than that in Gaussian clutters. Their robustness against impulsive interference is better than conventional FSS-NLMS.","PeriodicalId":117250,"journal":{"name":"2009 IEEE Aerospace conference","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127474073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A general likelihood function decomposition that is linear in target state
Pub Date: 2009-03-07 | DOI: 10.1109/AERO.2009.4839482
R. Streit, R. Wojtowicz
Likelihood function decomposition is a technique for coordinating deployed fields of multiple, diverse, heterogeneous sensors and for the automated processing of large volumes of multisensor data. It is a new concept that is potentially useful in many of the nonlinear problems that arise in sensor fields used for detection, classification, and localization. Algorithms derived via the likelihood decomposition method are of interest because they have linear computational complexity in many of the parameters of distributed networked sensors: the number of targets, the number of measurements, and the number of sensors. This efficiency is complemented by the ease with which the decompositions can be adapted to important application requirements such as landmass avoidance and ID/classification tags. The decomposition method also provides a natural way to exploit the spatial diversity of a sensor field to enable estimation of aspect-dependent targets. Observed information matrices derived from the likelihood decompositions can be exploited to maintain control of the field. The likelihood function decomposition method also simplifies the unconditional data likelihood function, enabling it to be written as an integral that is independent of the dimension of the target state space. This greatly reduces the computational complexity of the clutter rejection problem.
{"title":"A general likelihood function decomposition that is linear in target state","authors":"R. Streit, R. Wojtowicz","doi":"10.1109/AERO.2009.4839482","DOIUrl":"https://doi.org/10.1109/AERO.2009.4839482","url":null,"abstract":"Likelihood function decomposition is a technique to coordinate deployed fields of multiple diverse heterogeneous sensors and for the automated processing of large volumes of multisensor data. It is an innovative new concept that is potentially useful in many of the kinds of nonlinear problems that arise in sensor fields used for detection, classification, and localization. Algorithms derived via the likelihood decompositionmethod are of interest because they have linear computational complexity in many of the parameters in distributed networked sensors — the number targets, the number of measurements, and the number of sensors. This efficiency is complemented with the ease with which the decompositions can be adapted to important application requirements such as land mass avoidance and ID/classification tags. The decomposition method also provides a natural way to exploit the spatial diversity of a sensor field to enable estimation of the aspect dependent targets. Observed information matrices derived from the likelihood decompositions can be exploited to maintain control of the field. The likelihood function decomposition method also simplifies the unconditional data likelihood function, enabling it to be written as an integral that is independent of the dimension of the target state space. This greatly reduces the computational complexity of the clutter rejection problem","PeriodicalId":117250,"journal":{"name":"2009 IEEE Aerospace conference","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124999479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Search for earth-analogs with the Planet Hunter Mission
Pub Date: 2009-03-07 | DOI: 10.1109/AERO.2009.4839466
R. Goullioud, J. Marr, M. Shao, G. Marcy
Planet Hunter is a design for a spaceborne astrometric mission that utilizes technology developed for the Space Interferometry Mission (SIM). The instrument consists of two Michelson stellar interferometers and a telescope. The first interferometer chops between the target star and a set of reference stars. The second interferometer monitors the attitude of the instrument in the direction of the target star. The telescope monitors the attitude of the instrument in the other two directions.
{"title":"Search for earth-analogs with the Planet Hunter Mission","authors":"R. Goullioud, J. Marr, M. Shao, G. Marcy","doi":"10.1109/AERO.2009.4839466","DOIUrl":"https://doi.org/10.1109/AERO.2009.4839466","url":null,"abstract":"Planet Hunter is a design for a space borne astrometric mission. Planet Hunter utilizes technology developed for the Space Interferometry Mission (SIM). The instrument consists of two Michelson stellar interferometers and a telescope. The first interferometer chops between the target star and a set of reference stars. The second interferometer monitors the attitude of the instrument in the direction of the target star. The telescope monitors the attitude of the instrument in the other two directions.","PeriodicalId":117250,"journal":{"name":"2009 IEEE Aerospace conference","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125057923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of genetic algorithm for flight system verification and validation
Pub Date: 2009-03-07 | DOI: 10.1109/AERO.2009.4839631
G. Sacco, K. Barltrop, Cin-Young Lee, G. Horvath, R. Terrile, Seungwon Lee
Most complex systems nowadays rely heavily on software, and spacecraft and satellite systems are no exception. Moreover, as system capabilities increase, the software required to integrate and address system tasks becomes more complex. Hence, in order to guarantee a system's success, testing of the software becomes imperative. Traditionally, exhaustive testing of all possible behaviors was conducted. However, given the increased complexity and number of interacting behaviors of current systems, the time required for such thorough testing is prohibitive. As a result, many have adopted random testing techniques to achieve sufficient coverage of the test space within a reasonable amount of time. In this paper we propose the use of genetic algorithms (GAs) to greatly reduce the number of tests performed while still maintaining the same level of confidence as current random testing approaches. We present a GA specifically tailored to the systems testing domain. To validate our algorithm we used the results from the Dawn test campaign. Preliminary results are very encouraging, showing that our approach, when searching for the worst test cases, outperforms random search while limiting the search to a mere 6% of the full search domain.
{"title":"Application of genetic algorithm for flight system verification and validation","authors":"G. Sacco, K. Barltrop, Cin-Young Lee, G. Horvath, R. Terrile, Seungwon Lee","doi":"10.1109/AERO.2009.4839631","DOIUrl":"https://doi.org/10.1109/AERO.2009.4839631","url":null,"abstract":"Most complex systems nowadays heavily rely on software, and spacecraft and satellite systems are no exception. Moreover as systems capabilities increase, the corresponding software required to integrate and address system tasks becomes more complex. Hence, in order to guarantee a system's success, testing of the software becomes imperative. Traditionally exhaustive testing of all possible behaviors was conducted. However, given the increased complexity and number of interacting behaviors of current systems, the time required for such thorough testing is prohibitive. As a result many have adopted random testing techniques to achieve sufficient coverage of the test space within a reasonable amount of time. In this paper we propose the use of genetic algorithms (GA) to greatly reduce the number of tests performed, while still maintaining the same level of confidence as current random testing approaches. We present a GA specifically tailored for the systems testing domain. In order to validate our algorithm we used the results from the Dawn test campaign. Preliminary results seem very encouraging, showing that our approach, when searching the worst test cases, outperforms random search , limiting the search to a mere 6 % of the full search domain.","PeriodicalId":117250,"journal":{"name":"2009 IEEE Aerospace conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125154969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new efficient method for system structural analysis and generating Analytical Redundancy Relations
Pub Date: 2009-03-07 | DOI: 10.1109/AERO.2009.4839665
A. Fijany, F. Vatan
In this paper we present a new, efficient algorithmic method for generating Analytical Redundancy Relations (ARRs). ARRs are one of the crucial tools for model-based diagnosis as well as for optimizing, analyzing, and validating a system of sensors. However, despite the importance of ARRs for both system diagnosis and sensor optimization, little attention seems to have been paid to developing systematic and efficient approaches for their generation. In this paper we discuss the complexity of deriving ARRs and present a new, efficient algorithm for their derivation. Given a system with a set of L ARRs, our algorithm achieves a complexity of O(L^4) for generating them. To our knowledge, this is the first algorithm with polynomial complexity for the derivation of ARRs. We also present the results of applying our algorithm, for generating the complete set of ARRs, to both synthetic and industrial examples.
{"title":"A new efficient method for system structural analysis and generating Analytical Redundancy Relations","authors":"A. Fijany, F. Vatan","doi":"10.1109/AERO.2009.4839665","DOIUrl":"https://doi.org/10.1109/AERO.2009.4839665","url":null,"abstract":"In this paper we present a new efficient algorithmic method for generating the Analytical Redundancy Relations (ARRs). ARRs are one of the crucial tools for model-based diagnosis as well as for optimizing, analyzing, and validating the system of sensors. However, despite the importance of the ARRs for both system diagnosis and sensor optimization, it seems that less attention has been paid to the development of systematic and efficient approaches for their generation. In this paper we discuss the complexity in derivation of ARRs and present a new efficient algorithm for their derivation. Given a system with a set of L ARRs, our algorithm achieves a complexity of O(L4) for generating the ARRs. To our knowledge, this is the first algorithm with a polynomial complexity for derivation of ARRs. We also present the results of application of our algorithms, for generating the complete set of ARRs, to both synthetic and industrial examples.","PeriodicalId":117250,"journal":{"name":"2009 IEEE Aerospace conference","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125974218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TRANSPONDERS: Research and analysis for the development of telecommunication payloads in Q/V bands
Pub Date: 2009-03-07 | DOI: 10.1109/AERO.2009.4839341
C. Stallo, M. Lucente, T. Rossi, E. Cianca, M. Ruggieri, A. Paraboni, C. Cornacchini, A. Vernucci, M. T. Nocerino, A. Ceccarelli, L. Bruca, G. Codispoti, M. De Sanctis
Since the 1970s Italy has taken a pioneering approach to higher frequencies: first at Ka band (20/30 GHz) with the Sirio mission (launched in 1978), when that range was still a frontier, and then with the Italsat F1 and F2 experiments in the 1990s [1], which studied the Q and V bands in addition to Ka band. After those experiences, Italy, through the Italian Space Agency (ASI), was one of the first European countries to make an effort toward the exploitation of Q/V band in telecommunications. In 2004 ASI funded a feasibility study (phase A) called TRANSPONDERS, an Italian acronym for “research, analysis and study of Q/V payloads for telecommunications”, aimed at studying and designing a payload to fully characterize the channel at Q/V bands and to test novel adaptive interference/fading mitigation techniques such as Adaptive Coding and Modulation (ACM). Through this study, the feasibility and performance of preliminary broadband services at these frequencies can also be verified. A new phase, called TRANSPONDERS-2 and led by Space Engineering S.p.A., recently started (April 2008) to build on the achievements of the first phase. In this scenario, it is mandatory to identify pre-operational experimental missions aimed at fully verifying the feasibility of future Q/V-band satellite telecommunication applications. The experimental goals are mainly to test the effectiveness of Propagation Impairment Mitigation Techniques (PIMTs) [2] in these frequency bands and to minimize the implementation risks of an operational system characterized by a series of technological challenges.
{"title":"TRANSPONDERS: Research and analysis for the development of telecommunication payloads in Q/v bands","authors":"C. Stallo, M. Lucente, T. Rossi, E. Cianca, M. Ruggieri, A. Paraboni, C. Cornacchini, A. Vernucci, M. T. Nocerino, A. Ceccarelli, L. Bruca, G. Codispoti, M. De Sanctis","doi":"10.1109/AERO.2009.4839341","DOIUrl":"https://doi.org/10.1109/AERO.2009.4839341","url":null,"abstract":"Since the 70s Italy has had a pioneering approach to higher frequencies, at first at Ka band (20/30 GHz) with the Sirio experience (launched in 1978), when such a range was still a frontier, and then with Italsat F1 and F2 experiments in the 90s [1], studying Q and V bands in addition to Ka one as well. After those experiences, Italy through the Italian Space Agency (ASI) was one of the first European countries that have made an effort toward the exploitation of Q/V band in telecommunications. In 2004 ASI funded a feasibility study (phase A), called TRANSPONDERS, Italian acronym for “research, analysis and study of Q/V payloads for telecommunications”, aimed at studying and designing a payload to be used to fully characterize the channel at Q/V bands and to test novel adaptive interference/fading mitigation techniques such as ACM (Adaptive Coding and Modulation). Finally, the feasibility and performance of preliminary broadband services in such frequencies can be verified through this study .A new phase has recently started (April 2008), called TRANSPONDERS-2 and leaded by Space Engineering S.p.A., to continue the achievements gained during the first phase. In this scenario, it is mandatory to identify pre-operative experimental missions aiming at fully verifying the feasibility of future Q/V bands satellite telecommunication applications. The experimental goals are mainly to test the effectiveness of Propagation Impairment Mitigation Techniques (PIMTs) [2] in such frequency bands and the minimization of implementation risks for operative system characterized by a series of technological challenges.","PeriodicalId":117250,"journal":{"name":"2009 IEEE Aerospace conference","volume":"155 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123780276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constellation program's stretch goal requirements
Pub Date: 2009-03-07 | DOI: 10.1109/AERO.2009.4839729
Young H. Lee, Kevin A. Ingoldsby, Roger A. Galpin
In 2004, the Vision for Space Exploration (VSE) was announced by the United States President's Administration in an effort to explore space and to extend a human presence across our solar system. Subsequently, NASA established the Exploration Systems Mission Directorate (ESMD) to develop a constellation of new capabilities, supporting technologies, and foundational research that allows for the sustained and affordable exploration of space. ESMD then specified the primary mission for the Constellation Program (CxP): to carry out a series of human expeditions, ranging from Low Earth Orbit (LEO) to the surface of Mars and beyond, for the purpose of conducting human exploration of space. The CxP was established at the Lyndon B. Johnson Space Center (JSC) to manage the development of the flight and ground infrastructure and systems required to enable continued and extended human access to space.
{"title":"Constellation program's stretch goal requirements","authors":"Young H. Lee, Kevin A. Ingoldsby, Roger A. Galpin","doi":"10.1109/AERO.2009.4839729","DOIUrl":"https://doi.org/10.1109/AERO.2009.4839729","url":null,"abstract":"In 2004, the Vision for Space Exploration (VSE) was announced by the United States President's Administration in an effort to explore space and to extend a human presence across our solar system. Subsequently, NASA established the Exploration Systems Mission Directorate (ESMD) to develop a constellation of new capabilities, supporting technologies, and foundational research that allows for the sustained and affordable exploration of space. Then, ESMD specified the primary mission for the Constellation Program (CxP)—to carry out a series of human expeditions, ranging from Low Earth Orbit (LEO) to the surface of Mars and beyond for the purposes of conducting human exploration of space. The CxP was established at the Lyndon B. Johnson Space Center (JSC) to manage the development of the flight and ground infrastructure and systems that require enabling continued and extended human access to space.","PeriodicalId":117250,"journal":{"name":"2009 IEEE Aerospace conference","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125348591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}