Dspace: Real-Time 3D Visualization System for Spacecraft Dynamics Simulation
M. Pomerantz, Abhinandan Jain, Steven Myint. doi:10.1109/SMC-IT.2009.36

The multi-mission Dshell++ simulation framework has formed the basis for high-performance, physics-based simulations for a wide variety of space missions, including cruise and orbiter spacecraft, Entry, Descent and Landing (EDL) missions, and planetary surface rovers. The Dspace interactive, reusable 3D visualization system has been developed to support the diverse visualization needs of such complex real-time simulations.

A Novel Efficient Method for Conflicts Set Generation for Model-Based Diagnosis
A. Fijany, F. Vatan, A. Barrett. doi:10.1109/SMC-IT.2009.58

In this paper we present a new, efficient algorithmic method for generating the conflict sets for model-based diagnosis. Our method combines the strengths of the two different approaches proposed in the literature: fault detection and isolation (FDI), which is based on automatic control theory and statistical decision theory, and the approach known as DX, which is based on artificial intelligence techniques. The first building block in our method is a new, efficient algorithm for generating the complete set of analytical redundancy relations (ARRs) for the system in an implicit form. For the diagnosis, our method first performs (similar to DX approaches) a system simulation to calculate the expected values of the measurements. Any discrepancy, i.e., a difference between the expected and actual value of a measurement, triggers the diagnosis process. To this end, only those ARRs that involve the measurement with a discrepancy are checked for consistency, which leads to a significant reduction in the number of consistency checks usually performed by DX approaches. We demonstrate the efficiency of our new method by applying it to several synthetic systems and compare it with that of GDE.

Enhancing Collaboration among NASA Engineers through a Knowledge Sharing System
D. Topousis, E. Means, Keri S. Murphy, Manson Yew. doi:10.1109/SMC-IT.2009.29

Because solar system exploration often involves highly complex missions, a strong infrastructure for knowledge sharing among engineers is critical to facilitate problem solving and to preserve solutions for future work. Any system developed to meet these needs must be able to rapidly disseminate content, connect users to the right information regardless of where it is stored, and connect people with the experts they need. This paper describes a knowledge sharing solution called the NASA Engineering Network, which includes faceted search, communities of practice, an expertise locator, and a centralized engineering portal. The paper details how the components were developed and provides best practices that other aerospace organizations might use when implementing similar systems.

Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
R. Granat, K. Wagstaff, B. Bornstein, Benyang Tang, M. Turmon. doi:10.1109/SMC-IT.2009.22

Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We observed good performance from both an existing ABFT method for matrix multiplication and a novel ABFT method for exponentiation. These techniques bring us a step closer to "rad-hard" machine learning algorithms.

A Case for Database Filesystems
P. A. Adams, John C. Hax. doi:10.1109/SMC-IT.2009.28

Data-intensive science offers new challenges and opportunities for information technology, and for traditional relational databases in particular. Database file systems offer the potential to store Level 0 data and analyze Level 1 and Level 3 data within the same database system. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a database file system feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured (file) data inside the Oracle database. SecureFiles presents the best of both the file system and the database worlds for unstructured content. Data stored in SecureFiles can be queried or written at performance levels comparable to those of traditional file systems while retaining the advantages of the Oracle database.

A Reusable Process Control System Framework for the Orbiting Carbon Observatory and NPP Sounder PEATE Missions
C. Mattmann, D. Freeborn, D. Crichton, Brian M. Foster, Andrew F. Hart, D. Woollard, S. Hardman, P. Ramirez, S. Kelly, A. Y. Chang, C. E. Miller. doi:10.1109/SMC-IT.2009.27

We describe a reusable architecture and implementation framework for managing science processing pipelines for mission ground data systems. Our system, dubbed the Process Control System (PCS), improves upon an existing software component, the OODT Catalog and Archive Service (CAS), which has already supported the QuikSCAT, SeaWinds, and AMT earth science missions. This paper focuses on PCS within the context of two current earth science missions: the Orbiting Carbon Observatory (OCO) and NPP Sounder PEATE projects.

SKEYP: AI Applied to SOHO Keyhole Operations
N. Policella, H. Oliveira, T. Siili. doi:10.1109/SMC-IT.2009.16

The Solar and Heliospheric Observatory (SOHO) mission operations team has, due to a spacecraft malfunction, had to deal with the keyhole periods problem since 2003. This combinatorial problem arises from the conflict between limited telemetry downlink capabilities during keyhole periods and the need to maximize the return of science data while respecting other constraints, such as limited on-board storage capacity. Until recently, this problem was addressed by manually generating data production and downlink plans. This work presents the SOHO Keyhole Planner (SKEYP), a software tool designed to improve on the manual plan-generation workflow by shortening the time required and flattening the associated learning curve.

Sequential Principal Component Analysis - A Hardware-Implementable Transform for Image Compression
T. Duong, Vu A. Duong. doi:10.1109/SMC-IT.2009.49

This paper presents the JPL-developed Sequential Principal Component Analysis (SPCA) algorithm for feature extraction and image compression, based on a "dominant-term selection" unsupervised learning technique that requires an order of magnitude less computation and has a simpler architecture than state-of-the-art gradient-descent techniques. The algorithm is inherently amenable to a compact, low-power, high-speed VLSI hardware embodiment. The paper compares the lossless image compression performance of SPCA with that of JPEG2000, which is widely used due to its simple hardware implementability. JPEG2000 is not an optimal data compression technique because its transform is fixed regardless of the structure of the data. Conventional PCA, by contrast, is a data-dependent transform, but it is difficult to implement in compact VLSI hardware due to its high computational and architectural complexity. The "dominant-term selection" SPCA algorithm allows, for the first time, a compact, low-power hardware implementation of the powerful PCA algorithm. The paper presents a direct comparison of SPCA versus JPEG2000, incorporating Huffman and arithmetic coding for completeness of the data compression operation. Simulation results show that SPCA, as a data-dependent transform, is superior to JPEG2000. When implemented in hardware, this technique is projected to be well suited to future NASA missions for autonomous on-board image data processing to improve the effective communication bandwidth.

Extensible Simulation of Planets and Comets
Natalie Wiser-Orozco, K. Schubert, E. Gomez, R. Botting. doi:10.1109/SMC-IT.2009.33

This research is intended to enhance the way we view and study our solar system and others by allowing scientists in astronomy, physics, and computer science to accurately simulate celestial systems. The Extensible Simulator organizes individual groups of celestial bodies, calculates their positions, and graphically visualizes their movement using the computed positions. It is extensible in that it accommodates additional numerical methods and gravitational functions, body shapes and behaviors, and camera views.

Integrating Software Technologies in RAXEM2 - AI Meets Information Technology
G. Bernardi, A. Cesta, Gabriella Cortellessa. doi:10.1109/SMC-IT.2009.14

This paper describes aspects of RAXEM2, an intelligent software system developed to support human mission planners in the daily task of planning uplink commands for the MARS EXPRESS mission at the European Space Agency. A first system, RAXEM, has been in operational use at the ESA-ESOC mission control center since summer 2007. During 2008 the authors worked on a new version of the tool, both to incorporate new functionality and to capture the entire life cycle of the plan uplink problem. RAXEM2 was released for daily use in March 2009, replacing the previous work practice. The new tool combines a flexible AI-based automated planning algorithm, a user-friendly front-end, and several information management functions, guaranteeing continuity of work practice throughout. The paper touches on various aspects of the new system and comments on how a key factor in its success has been the integration of these different modules in a single comprehensive support environment.