Cellular Genetic Algorithms (cGAs) have shown their ability not only to solve difficult optimization problems but also to outperform centralized Genetic Algorithms (GAs) in terms of efficiency and efficacy. The study presented herein aims to analyze and compare 2D and 3D cellular GAs while keeping their configuration parameters, such as population size, neighbourhood radius, local selection method, and replacement policies, essentially the same. A primary objective of this paper is to provide broad insight into the advantages of increasing cellular dimensionality for the future development of 3D adaptive optimization engine architectures.
{"title":"Towards 3D Architectures: A Comparative Study on Cellular GAs Dimensionality","authors":"A. Morales-Reyes, Asmaa Al-Naqi, A. Erdogan, T. Arslan","doi":"10.1109/AHS.2009.29","DOIUrl":"https://doi.org/10.1109/AHS.2009.29","url":null,"abstract":"Cellular Genetic Algorithms (cGAs) have shown their ability to solve not only difficult optimization problems, but also outperform centralized Genetic Algorithms (GAs) in terms of efficiency and efficacy. The study herein presented aims to analyze and compare 2D and 3D cellular GAs, while maintaining in general their configuration constraints such as population size, neighbourhood radius, local selection method, replacement polices, among others. A primary objective of this paper is to provide a wide insight into the advantages of increasing cellular dimensionality for future development of 3D adaptive optimization engine architectures.","PeriodicalId":318989,"journal":{"name":"2009 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131691390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Upcoming NASA cosmology survey missions, such as the Joint Dark Energy Mission (JDEM), carry instruments with multiple focal planes populated with many large sensor detector arrays. These sensors are passively cooled to low temperatures for low-light and near-infrared (NIR) signal detection, and the sensor readout electronics circuitry must perform at extremely low noise levels to enable the required new science measurements. Because we are at the technological edge of enhanced performance for sensors and readout electronics circuitry, as determined by the thermal noise level at a given temperature in the analog domain, we must find new ways of further compensating for the noise in the digital signal domain. To facilitate this new approach, state-of-the-art sensors are augmented at their array hardware boundaries by non-illuminated, photon-insensitive reference pixels, which can be used to reduce noise attributed to the sensor and readout electronics. A few methodologies have been proposed for processing the information carried by reference pixels in the digital domain. These methods use spatial and temporal global statistical scalar parameters derived from boundary reference pixel information to enhance the active pixels' signals. To move beyond this heritage methodology, we apply the NASA-developed technology known as the Hilbert-Huang Transform Data Processing System (HHT-DPS) to a component of the reference pixel vectors' information. This allows us to derive a noise correction array which, in addition to the statistical parameter over the signal trend, is applied to the active pixel array.
{"title":"New Methodology for Reducing Sensor and Readout Electronics Circuitry Noise in Digital Domain","authors":"S. Kizhner, Katherine Heinzen","doi":"10.1109/AHS.2009.15","DOIUrl":"https://doi.org/10.1109/AHS.2009.15","url":null,"abstract":"Upcoming NASA cosmology survey missions, such as Joint Dark Energy Mission (JDEM), carry instruments with multiple focal planes populated with many large sensor detector arrays. These sensors are passively cooled to low temperatures for low-level light and near-infrared (NIR) signal detection, and the sensor readout electronics circuitry must perform at extremely low noise levels to enable new required science measurements. Because we are at the technological edge of enhanced performance for sensors and readout electronics circuitry, as determined by thermal noise level at given temperature in analog domain, we must find new ways of further compensating for the noise in the signal digital domain. To facilitate this new approach, state-of-the-art sensors are augmented at their array hardware boundaries by non-illuminated or non-sensitive to photons reference pixels, which can be used to reduce noise attributed to sensor and readout electronics. There are a few proposed methodologies of processing in the digital domain the information carried by reference pixels. These methods involve using spatial and temporal global statistical scalar parameters derived from boundary reference pixel information to enhance the active pixels’ signals. To make a step beyond this heritage methodology, we apply the NASA-developed technology known as the Hilbert-Huang Transform Data Processing System (HHT-DPS) to some component of reference pixel vectors’ information. This allows to derive a noise correction array, which, in addition to the statistical parameter over the signal trend, is applied to the active pixel array.","PeriodicalId":318989,"journal":{"name":"2009 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134572289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present parallel implementations of a multilayer perceptron that uses reduced, variable bit-width hardware to improve resource utilization while still providing known levels of accuracy. We show results for a chemical classification application and introduce ways to take advantage of the capabilities of a reconfigurable device. We show how the optimized circuit can be used synergistically in parallel with other classifiers for added capability, and alone for fault tolerance and power saving.
{"title":"Synergistic Reconfiguration of Adaptive Precision Chemical Classifiers","authors":"Michael Gilberti, A. Doboli","doi":"10.1109/AHS.2009.49","DOIUrl":"https://doi.org/10.1109/AHS.2009.49","url":null,"abstract":"We present parallel implementations of a multilayer perceptron that uses reduced variable bit width hardware to improve resource utilization while still providing known levels of accuracy. We show results for a chemical classification application and introduce ways in which to take advantage of the capabilities of a reconfigurable device. We show how the optimized circuit can be used synergistically in parallel with other classifiers for added capability and alone for fault tolerance and saving power.","PeriodicalId":318989,"journal":{"name":"2009 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130120942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The increasing amount of data produced by satellites poses a communication problem due to the limited data rate of the downlink. This bottleneck is addressed by putting more and more processing power on board to compress data to a rate the downlink can handle. This paper introduces an algorithm developed to compress hyperspectral images at low complexity and describes its mapping to a new hardware platform called the Xentium, which is characterized by both high flexibility and high processing power. After introducing the algorithm, the Xentium hardware is described. The different mapping strategies are explained and a cycle estimate is derived. It turns out that the compression algorithm can indeed be mapped efficiently onto a reconfigurable tile like the Xentium: a 1024×1024 image with 50 bands can be compressed in about 4 seconds on a single tile, and adding more tiles gives a close-to-linear speedup.
{"title":"Low-Complexity Hyperspectral Image Compression on a Multi-tiled Architecture","authors":"Karel H. G. Walters, A. Kokkeler, S. H. Gerez, G. Smit","doi":"10.1109/AHS.2009.28","DOIUrl":"https://doi.org/10.1109/AHS.2009.28","url":null,"abstract":"The increasing amount of data produced in satellites poses a downlink communication problem due to the limited data rate of the downlink. This bottleneck is solved by introducing more and more processing power on-board to compress data to a satisfiable rate. This paper introduces an algorithm which has been developed to compress hyperspectral images at low complexity and describes its mapping to a new hardware platform called the Xentium. It is characterized by both high flexibility as well as high processing power. After introducing the algorithm the Xentium hardware is described. The different mapping strategies are explained and a cycle estimation is derived. It turns out that the compression algorithm can indeed be efficiently mapped on a reconfigurable tile like the Xentium. An image of 1024X1024 with 50 bands can be compressed in about 4 seconds on a single tile. Adding more tiles gives a close to linear speedup.","PeriodicalId":318989,"journal":{"name":"2009 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130276587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-reconfigurable hardware is an emerging technology that will enable computing systems to adapt to changing environments. This paper deals with the design of architecture kernels for an autonomous on-board system and the development of an adaptation manager for real-time scheduling of the reconfigurable hardware fabric. Our approach employs a reconfigurable computer architecture with two key layers: the adaptation manager and the real-time configuration kernel. This provides significant advantages in terms of flexibility, scalability, cost, and compatibility with embedded technology. Some preliminary results are presented.
{"title":"An Adaptable Task Manager for Reconfigurable Architecture Kernels","authors":"Y. Shiyanovskii, F. Wolff, C. Papachristou, D. Weyer","doi":"10.1109/AHS.2009.65","DOIUrl":"https://doi.org/10.1109/AHS.2009.65","url":null,"abstract":"Self-reconfigurable hardware is a new emerging technology which will enable adaptation of computing systems to changing environments.This paper deals with the design of architecture kernels for an autonomous on-board system and the development of an adaptation manager for real-time scheduling of the reconfigurable hardware fabric.Our approach employs a reconfigurable computer architecture with two key layers: the adaptation manager and the real time configuration kernel. This provides significant advantages in terms of flexibility, scalability, cost, and compatibility with embedded technology. Some preliminary results are presented.","PeriodicalId":318989,"journal":{"name":"2009 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128237298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents the concept of a biologically inspired reconfigurable hardware cell architecture which supports self-organisation and self-healing. Two fundamental processes in biology, namely fertilization-to-birth development and cellular self-healing, have inspired the development of this cell architecture. In biology, as in our hardware cell architecture, it is the DNA that enables these processes. We propose a platform based on electronic DNA (eDNA) and show through simulation its capabilities as a new generation of robust reconfigurable hardware platforms. We have created a Java-based simulator for our self-organisation and self-healing algorithms, and the results obtained from it look promising.
{"title":"eDNA: A Bio-Inspired Reconfigurable Hardware Cell Architecture Supporting Self-organisation and Self-healing","authors":"M. Boesen, J. Madsen","doi":"10.1109/AHS.2009.22","DOIUrl":"https://doi.org/10.1109/AHS.2009.22","url":null,"abstract":"This paper presents the concept of a biological inspired reconfigurable hardware cell architecture which supports self-organisation and self-healing. Two fundamental processes in biology, namely fertilization-to-birth and cell self-healing have inspired the development of this cell architecture. In biology as well as in our hardware cell architecture it is the DNA which enables these processes. We propose a platform based on the electronic DNA (eDNA) and show through simulation, its capabilities as a new generation of robust reconfigurable hardware platforms. We have created a Java based simulator to simulate our self-organisation and self-healing algorithms and the results obtained from this looks promising.","PeriodicalId":318989,"journal":{"name":"2009 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130148302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The majority of research in wireless ad-hoc networks relies on software tools that simulate the network environment under strictly controlled conditions, mainly because of the extreme cost of a realistic testbed, the difficulty of adapting to real-time topological changes in the environment, and the complexity of implementing such a testbed. In this paper, we present a testbed with a real wireless, task-oriented, autonomous MANET based on the VxWorks RTOS platform, using Xilinx ML310 development boards with Virtex-II Pro FPGA devices, an integrated gumstix/iRobot platform running embedded Linux, and off-the-shelf laptops and desktops. As an example experiment, we consider the task of uniformly covering an unknown geographical terrain using autonomous MANET nodes with a limited communication range, which has many military applications such as search and rescue, surveillance, and locating and mapping chemical and biological hazards. To achieve this objective, mobile nodes exchange one-hop neighbor information to decide their speeds and directions without any central coordinator. Each node runs a genetic algorithm (GA) to select fitter speeds and directions from an exponentially large number of choices for better convergence toward a uniform distribution. The testbed experiments provide an effective research tool and demonstrate that our GA delivers acceptable network area coverage.
{"title":"Testbed for Node Communication in MANETs to Uniformly Cover Unknown Geographical Terrain Using Genetic Algorithms","authors":"C. Dogan, C. Sahin, M. U. Uyar, E. Urrea","doi":"10.1109/AHS.2009.38","DOIUrl":"https://doi.org/10.1109/AHS.2009.38","url":null,"abstract":"Majority of research in wireless ad-hoc networks is based on software tools simulating network environment under strictly controlled conditions, mainly due to its extreme cost, difficulty of adapting real-time topological changes in the environment and complexity of implementing a realistic testbed. In this paper, we present a testbed with real wireless task-oriented autonomous MANET based on VxWorks RTOS platform using Xilinx ML310 development boards with Virtex-II Pro FPGA devices and integrated gumstix/iRobot platform running embedded linux, as well as off-the-shelf laptops and desktops. As an example experiment, we consider the task of uniformly covering an unknown geographical terrain using autonomous MANET nodes with a limited communication range, which has many military missions such as search and rescue missions, surveillance tasks, locating and mapping chemical, and biological hazards. To achieve this objective, mobile nodes exchange one-hop neighbor information to decide their speed and directions without any central coordinator. Each node runs a genetic algorithm (GA) to select fitter speed and direction among an exponentially large number of choices for a better convergence toward a uniform distribution. The testbed experiments provide an effective research tool to demonstrate that our GA delivers acceptable network area coverage.","PeriodicalId":318989,"journal":{"name":"2009 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132674126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One of the main challenges in building reconfigurable asynchronous architectures is the design of the reconfigurable interconnect scheme. An asynchronous channel connecting a sender to multiple receivers cannot be split or shared between the receivers without additional complex circuitry to acknowledge every transition on the channel. The technique used in existing asynchronous reconfigurable architectures is to design the interconnect scheme so that all tokens have unique senders and receivers; tokens needed by more than one block must first be duplicated. At the duplication stage, the data and request signals of a sender are copied to all receivers. For any given configuration, the resulting acknowledge signals must be synchronised so that the sender receives an acknowledge only when all receivers involved in the communication have acknowledged its request. In this paper we present a novel method for conditional acknowledge synchronisation in asynchronous interconnects. Compared to the commonly employed synchronising technique, our method yields reconfigurable interconnects that are smaller in area, require fewer configuration bits, and consume less power. For a sample island-style interconnect, our designs showed a 25% reduction in configuration bits, up to a 47% reduction in area, and up to a 45% reduction in power consumption over equivalent interconnects designed using the traditional synchronisation technique.
{"title":"Conditional Acknowledge Synchronisation in Asynchronous Interconnect Switch Design","authors":"Khodor Ahmad Fawaz, T. Arslan, Iain A. B. Lindsay","doi":"10.1109/AHS.2009.57","DOIUrl":"https://doi.org/10.1109/AHS.2009.57","url":null,"abstract":"One of the main challenges in building reconfigurable asynchronous architectures is the design of the reconfigurable interconnect scheme. An asynchronous channel connecting a sender to multiple receivers cannot be split or shared between the receivers without additional complex circuitry to acknowledge every transition on the channel. The technique used in existing asynchronous reconfigurable architectures involves designing the interconnect scheme so that all the tokens have unique senders and receivers; tokens needed by more than one block must first be duplicated. At the duplication stage, the data and request signals of a sender are copied to all receivers. For any given configuration, the resulting acknowledge signals must be synchronised so the sender receives an acknowledge only when the receivers involved in communication have acknowledged its request.In this paper we present a novel method for conditional acknowledge synchronisation in asynchronous interconnects. Compared to the commonly employed synchronising technique, our method results in the design of reconfigurable interconnects which are smaller in area, require less configuration bits, and consume less power. For a sample island-style interconnect, our designs showed a 25% reduction in configuration bits, up to 47% reduction in area and up to 45% reduction in power consumption over equivalent interconnects designed using the traditionally used synchronisation technique.","PeriodicalId":318989,"journal":{"name":"2009 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115894223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper explores the biometric identification and verification of human subjects via fingerprints using an adaptive FPGA-based weightless neural network. The exploration espoused here is a hardware-based system motivated by the need for accurate and rapid fingerprint identification, which may be lacking in alternative systems such as software-based neural networks. The fingerprints are pre-processed and binarized, and the binarized fingerprints are partitioned into training and test sets for the FPGA-based neural network. The neural network employed in this exploration is the Enhanced Probabilistic Convergent Network (EPCN). The results obtained are compared with those of alternative systems and demonstrate the suitability of the FPGA-based EPCN for such tasks.
{"title":"A Fingerprint Identification System Using Adaptive FPGA-Based Enhanced Probabilistic Convergent Network","authors":"Pierre Lorrentz, W. Howells, K. Mcdonald-Maier","doi":"10.1109/AHS.2009.8","DOIUrl":"https://doi.org/10.1109/AHS.2009.8","url":null,"abstract":"This paper explores the biometric identification and verification of human subjects via fingerprints utilising an adaptive FPGA-based weightless neural networks. The exploration espoused here is a hardware-based system motivated by the need for accurate and rapid response to identification of fingerprints which may be lacking in other alternative systems such as software based neural networks. The fingerprints are pre-processed and binarized, and the binarized fingerprints are partitioned into train- and test-set for the FPGA-based neural network. The neural network employed in this exploration is known as Enhanced Convergent Network (EPCN). The results obtained are compared to other alternative systems. They demonstrate the suitability of the FPGA-based EPCN for such tasks.","PeriodicalId":318989,"journal":{"name":"2009 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132075983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient on-board lossless hyperspectral data compression reduces data volume in order to meet the limited downlink capabilities of NASA and DoD missions. The technique also improves signature extraction, object recognition, and feature classification capabilities by providing exactly reconstructed data over constrained downlink resources. At JPL, a novel, adaptive, and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well suited for implementation in hardware. It was modified for pushbroom instruments, which makes it practical for flight implementations. A prototype of the compressor (and decompressor) is available in software, but this implementation may not meet the speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x over the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the 'Modified Fast Lossless' compression algorithm for pushbroom instruments on a Field Programmable Gate Array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (the Xilinx Virtex-4 and Virtex-5 families) and compresses one sample every clock cycle, providing a fast and practical real-time solution for space applications.
{"title":"Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space","authors":"N. Aranki, D. Keymeulen, A. Bakhshi, M. Klimesh","doi":"10.1109/AHS.2009.66","DOIUrl":"https://doi.org/10.1109/AHS.2009.66","url":null,"abstract":"Efficient on-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed ‘Fast Lossless’ algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. It was modified for pushbroom instruments and makes it practical for flight implementations. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x vs. the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the ‘Modified Fast Lossless’ compression algorithm for pushbroom instruments on a Field Programmable Gate Array (FPGA). The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for Space applications.","PeriodicalId":318989,"journal":{"name":"2009 NASA/ESA Conference on Adaptive Hardware and Systems","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129920080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}