Spiking Neural Network for Asset Allocation Implemented Using the TrueNorth System
Pub Date: 2019-06-01 | DOI: 10.1109/CCAAW.2019.8904899
C. Yakopcic, Nayim Rahman, Tanvir Atahary, Md. Zahangir Alom, T. Taha, Alex Beigh, Scott Douglass
Asset allocation is a compute-intensive combinatorial optimization problem commonly tasked to autonomous decision-making systems. However, cognitive agents interact with their environment in real time and are generally heavily power constrained. Thus, there is a strong need for a real-time asset allocation agent running on a low-power computing platform to ensure efficiency and portability. As an alternative to traditional techniques, this paper describes how spiking neuron algorithms can be used to carry out asset allocation. We show that our spiking neuron approach yields a significant reduction in computation time if the user is willing to accept a near-optimal solution. Recently, specialized neuromorphic spiking processors have demonstrated dramatic reductions in power consumption relative to traditional processors for certain applications. The improved efficiency is primarily due to unique algorithmic processing that reduces data movement and increases parallel computation. In this work, we use the TrueNorth spiking neural network processor to implement our asset allocation algorithm. With an operating power of approximately 50 mW, we show the feasibility of performing portable, low-power task allocation on a spiking neuromorphic processor.
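The TrueNorth implementation itself is not reproduced in the abstract, but the core idea, neurons racing to spike so that the first spike claims an assignment, can be sketched in a few lines. The NumPy toy below (cost matrix, leak, gain, and threshold values all invented for illustration) shows a leaky integrate-and-fire winner-take-all race that produces a greedy, near-optimal task-to-asset assignment; it is a conceptual stand-in, not the authors' algorithm.

```python
import numpy as np

# Hypothetical 4x4 cost matrix: costs[t, a] is the cost of assigning
# asset a to task t (lower is better). Values are illustrative only.
costs = np.array([[2., 5., 1., 4.],
                  [3., 1., 4., 2.],
                  [5., 2., 3., 1.],
                  [1., 4., 2., 3.]])

rng = np.random.default_rng(0)
n_tasks, n_assets = costs.shape
v = np.zeros((n_tasks, n_assets))   # leaky integrate-and-fire membrane potentials
taken = np.full(n_tasks, -1)        # asset claimed by each task (-1 = none yet)
threshold, leak, gain = 1.0, 0.95, 0.1

for _ in range(500):
    # Cheaper assignments inject more current, so their neurons charge
    # faster; noise breaks ties, and claimed asset columns are inhibited.
    drive = costs.max() / costs + 0.05 * rng.random((n_tasks, n_assets))
    drive[:, taken[taken >= 0]] = 0.0
    v = leak * v + gain * drive
    for t in range(n_tasks):
        if taken[t] < 0 and v[t].max() >= threshold:
            taken[t] = int(v[t].argmax())   # first spike in the row wins
            v[:, taken[t]] = 0.0            # reset the claimed column...
            v[t] = 0.0                      # ...and the satisfied row
    if (taken >= 0).all():
        break

print("task -> asset:", taken)      # greedy, near-optimal assignment
```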
{"title":"Spiking Neural Network for Asset Allocation Implemented Using the TrueNorth System","authors":"C. Yakopcic, Nayim Rahman, Tanvir Atahary, Md. Zahangir Alom, T. Taha, Alex Beigh, Scott Douglass","doi":"10.1109/CCAAW.2019.8904899","DOIUrl":"https://doi.org/10.1109/CCAAW.2019.8904899","url":null,"abstract":"Asset allocation is a compute intensive combinatorial optimization problem commonly tasked to autonomous decision making systems. However, cognitive agents interact in real time with their environment and are generally heavily power constrained. Thus, there is strong need for a real time asset allocation agent running on a low power computing platform to ensure efficiency and portability. As an alternative to traditional techniques, work presented in this paper describes how spiking neuron algorithms can be used to carry out asset allocation. We show that a significant reduction in computation time can be gained if the user is willing to accept a near optimal solution using our spiking neuron approach. As of late, specialized neuromorphic spiking processors have demonstrated a dramatic reduction in power consumption relative to traditional processing techniques for certain applications. Improved efficiencies are primarily due to unique algorithmic processing that produces a reduction in data movement and an increase in parallel computation. In this work, we use the TrueNorth spiking neural network processor to implement our asset allocation algorithm. With an operating power of approximately 50 mW, we show the feasibility of performing portable low-power task allocation on a spiking neuromorphic processor.","PeriodicalId":196580,"journal":{"name":"2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124018977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantifying Degradations of Convolutional Neural Networks in Space Environments
Pub Date: 2019-06-01 | DOI: 10.1109/CCAAW.2019.8904903
E. Altland, Julia Mahon Kuzin, Ali Mohammadian, A. S. Abdalla, William C. Headley, Alan J. Michaels, Jonathan Castellanos, Joshua Detwiler, Paolo Fermin, Raquel Ferrá, Conor Kelly, Casey Latoski, Tiffany Ma, Thomas Maher
Advances in machine learning applications for image processing, natural language processing, and direct ingestion of radio frequency signals continue to accelerate. Less attention, however, has been paid to the resilience of these machine learning algorithms when implemented on real hardware and subjected to unintentional and/or malicious errors during execution, such as those caused by space-based single-event upsets (SEUs). This paper presents a series of results quantifying the rate and level of performance degradation that occurs when convolutional neural networks (CNNs) are subjected to selected bit errors in single-precision number representations. The results are conditioned on ten different error-case events to isolate their impacts, showing that CNN performance can be gradually degraded or reduced to random guessing depending on where errors arise. The degradations are then translated into expected operational lifetimes for each of four CNNs when deployed to space radiation environments. The discussion also provides a foundation for ongoing research that enhances the overall resilience of neural network architectures and implementations in space under both random and malicious error events, offering significant improvements over current implementations. Future work to extend these CNN resilience evaluations, conditioned on architectural design elements and well-known error-correction methods, is also introduced.
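The abstract does not specify the authors' injection harness, but the mechanics of a single-precision bit error are easy to demonstrate. The sketch below (function name and bit positions chosen for illustration, not taken from the paper) flips one bit of a stored float32 weight and shows why the location of the error matters: a mantissa LSB flip barely perturbs the value, while an exponent-bit flip can change it by dozens of orders of magnitude.

```python
import numpy as np

def flip_bit(weights: np.ndarray, bit: int) -> np.ndarray:
    """Flip one bit position (0 = mantissa LSB, 23-30 = exponent,
    31 = sign) in every float32 element, emulating an SEU in memory."""
    raw = weights.astype(np.float32).view(np.uint32)
    return (raw ^ np.uint32(1 << bit)).view(np.float32)

w = np.array([0.75], dtype=np.float32)   # example stored weight
for bit in (0, 22, 23, 30):              # mantissa LSB/MSB, exponent LSB/MSB
    print(f"bit {bit:2d}: 0.75 -> {flip_bit(w, bit)[0]}")
# A bit-0 flip gives ~0.75000006; a bit-30 flip gives ~2.98e+38.
```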
{"title":"Quantifying Degradations of Convolutional Neural Networks in Space Environments","authors":"E. Altland, Julia Mahon Kuzin, Ali Mohammadian, A. S. Abdalla, William C. Headley, Alan J. Michaels, Jonathan Castellanos, Joshua Detwiler, Paolo Fermin, Raquel Ferrá, Conor Kelly, Casey Latoski, Tiffany Ma, Thomas Maher","doi":"10.1109/CCAAW.2019.8904903","DOIUrl":"https://doi.org/10.1109/CCAAW.2019.8904903","url":null,"abstract":"Advances in machine learning applications for image processing, natural language processing, and direct ingestion of radio frequency signals continue to accelerate. Less attention, however, has been paid to the resilience of these machine learning algorithms when implemented on real hardware and subjected to unintentional and/or malicious errors during execution, such as those occurring from space-based single event upsets (SEU). This paper presents a series of results quantifying the rate and level of performance degradation that occurs when convolutional neural nets (CNNs) are subjected to selected bit errors in single-precision number representations. This paper provides results that are conditioned upon ten different error case events to isolate the impacts showing that CNN performance can be gradually degraded or reduced to random guessing based on where errors arise. The degradations are then translated into expected operational lifetimes for each of four CNNs when deployed to space radiation environments. The discussion also provides a foundation for ongoing research that enhances the overall resilience of neural net architectures and implementations in space under both random and malicious error events, offering significant improvements over current implementations. Future work to extend these CNN resilience evaluations, conditioned upon architectural design elements and well-known error correction methods, is also introduced.","PeriodicalId":196580,"journal":{"name":"2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122715268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
State Predictor of Classification Cognitive Engine Applied to Channel Fading
Pub Date: 2019-06-01 | DOI: 10.1109/CCAAW.2019.8904888
Rigoberto Roché, J. Downey, Mick V. Koch
This study presents the application of machine learning (ML) to a space-to-ground communication link, showing how ML can be used to detect the presence of detrimental channel fading. Using this channel-state information, the communication link can be operated more efficiently by reducing the amount of data lost during fades. The motivation for this work is channel fading observed during on-orbit operations with NASA's Space Communication and Navigation (SCaN) testbed on the International Space Station (ISS). This paper presents the process of extracting a target concept (fading versus not-fading) from the raw data. The preprocessing and data exploration effort is explained in detail, with a list of assumptions made for parsing and labelling the dataset. The model selection process is explained, specifically emphasizing the benefits of using an ensemble of algorithms with majority voting for binary classification of the channel state. Experimental results are shown, highlighting how an end-to-end communication system can use knowledge of the channel-fading status to identify fading and take appropriate action. Using a laboratory testbed to emulate channel fading, the overall performance is compared to standard adaptive methods without fading knowledge, such as adaptive coding and modulation.
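The specific features and base classifiers used in the study are not given in the abstract; the scikit-learn sketch below, run on synthetic stand-in features (e.g., windowed receiver statistics), shows only the majority-voting ensemble pattern it describes: each base classifier votes fading or not-fading, and the ensemble reports the majority.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in data: 4 features per window, label 1 = fading.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# Hard voting: the ensemble declares "fading" when a majority of the
# base classifiers do, mirroring the binary channel-state predictor.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=1)),
                ("knn", KNeighborsClassifier())],
    voting="hard",
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```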
{"title":"State Predictor of Classification Cognitive Engine Applied to Channel Fading","authors":"Rigoberto Roche', J. Downey, Mick V. Koch","doi":"10.1109/CCAAW.2019.8904888","DOIUrl":"https://doi.org/10.1109/CCAAW.2019.8904888","url":null,"abstract":"This study presents the application of machine learning (ML) to a space-to-ground communication link, showing how ML can be used to detect the presence of detrimental channel fading. Using this channel state information, the communication link can be used more efficiently by reducing the amount of lost data during fading. The motivation for this work is based on channel fading observed during on-orbit operations with NASA's Space Communication and Navigation (SCaN) testbed on the International Space Station (ISS). This paper presents the process to extract a target concept (fading and not-fading) from the raw data. The preprocessing and data exploration effort is explained in detail, with a list of assumptions made for parsing and labelling the dataset. The model selection process is explained, specifically emphasizing the benefits of using an ensemble of algorithms with majority voting for binary classification of the channel state. Experimental results are shown, highlighting how an end-to-end communication system can utilize knowledge of the channel fading status to identity fading and take appropriate action. With a laboratory testbed to emulate channel fading, the overall performance is compared to standard adaptive methods without fading knowledge, such as adaptive coding and modulation.","PeriodicalId":196580,"journal":{"name":"2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)","volume":"32 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123275475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectral Attention-Driven Intelligent Target Signal Identification on a Wideband Spectrum
Pub Date: 2019-01-31 | DOI: 10.1109/CCAAW.2019.8904904
G. Mendis, Jin Wei, A. Madanayake, S. Mandal
Owing to advances in artificial intelligence, machine learning techniques have been applied to spectrum sensing and modulation recognition. However, essential challenges remain in wideband spectrum sensing. Signal processing across a wideband spectrum is computationally expensive, and it is highly likely that only a small portion of the wideband spectrum contains features useful for the targeted application. Therefore, to achieve an effective tradeoff between low computational complexity and high spectrum-sensing accuracy, a spectral attention-driven, reinforcement learning based intelligent method is developed for effective and efficient detection of event-driven target signals in a wideband spectrum. As a first stage toward this goal, it is assumed that the modulation technique used by the targeted signal is available as prior knowledge. The proposed method consists of two main components: a spectral correlation function (SCF) based spectral visualization scheme and a spectral attention-driven reinforcement learning mechanism that adaptively selects the spectrum range and implements the intelligent signal detection. Simulations illustrate that, by effectively selecting the spectrum ranges to observe, the proposed method can achieve greater than 90% signal-detection accuracy while limiting spectrum observation and SCF computation to 5 of 64 spectrum locations.
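The paper's reinforcement learning mechanism operates on SCF features; as a simplified stand-in, the sketch below uses an epsilon-greedy bandit with a synthetic ground-truth occupancy (band indices and reward model invented for illustration) to show how an agent can learn to watch only 5 of 64 sub-bands and still concentrate its observations on the bands that contain target signals.

```python
import numpy as np

# Epsilon-greedy bandit over 64 sub-bands: observe only 5 per step and
# learn which bands tend to contain the target signal. In the paper the
# reward would come from SCF-based detection, not this synthetic truth.
rng = np.random.default_rng(2)
n_bands, n_watch, eps = 64, 5, 0.1
true_occupancy = np.zeros(n_bands)
true_occupancy[[3, 17, 40]] = 0.9      # hypothetical target-signal bands
q = np.zeros(n_bands)                  # estimated value of watching each band
counts = np.zeros(n_bands)

for _ in range(2000):
    if rng.random() < eps:             # explore: 5 random bands
        watch = rng.choice(n_bands, size=n_watch, replace=False)
    else:                              # exploit: 5 highest-value bands
        watch = np.argsort(q)[-n_watch:]
    hits = rng.random(n_watch) < true_occupancy[watch]   # detector output
    counts[watch] += 1
    q[watch] += (hits - q[watch]) / counts[watch]        # incremental mean

print("learned top bands:", np.sort(np.argsort(q)[-n_watch:]))
```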
{"title":"Spectral Attention-Driven Intelligent Target Signal Identification on a Wideband Spectrum","authors":"G. Mendis, Jin Wei, A. Madanayake, S. Mandal","doi":"10.1109/CCAAW.2019.8904904","DOIUrl":"https://doi.org/10.1109/CCAAW.2019.8904904","url":null,"abstract":"Due to the advances of artificial intelligence, machine learning techniques have been applied for spectrum sensing and modulation recognition. However, there still remain essential challenges in wideband spectrum sensing. Signal processing in the wideband spectrum is computationally expensive. Additionally, it is highly possible that only a small portion of the wideband spectrum information contain useful features for the targeted application. Therefore, to achieve an effective tradeoff between the low computational complexity and the high spectrum-sensing accuracy, a spectral attention-driven reinforcement learning based intelligent method is developed for effective and efficient detection of event-driven target signals in a wideband spectrum. As the first stage to achieve this goal, it is assumed that the modulation technique used is available as a prior knowledge of the targeted important signal. The proposed spectral attention-driven intelligent method consists of two main components, a spectral correlation function (SCF) based spectral visualization scheme and a spectral attention-driven reinforcement learning mechanism that adaptively selects the spectrum range and implements the intelligent signal detection. Simulations illustrate that because of the effectively selecting the spectrum ranges to be observed, the proposed method can achieve > 90% accuracy of signal detection while observation of spectrum and calculation of SCF is limited to 5 out of 64 of spectrum locations.","PeriodicalId":196580,"journal":{"name":"2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129943005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}