Machine Learning and Optimization for Resource-Constrained Platforms
Patrick Barnes, R. Murawski
2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)
Pub Date: 2019-06-01 | DOI: 10.1109/CCAAW.2019.8904897
Artificial intelligence (AI) and machine learning (ML) have grown at an incredible rate in recent years and show no sign of stopping. Manufacturing, education, transportation, and genetic research are all industries in which AI algorithms have found practical applications, increasing task efficiency and reducing cost through process optimization, pattern recognition, and automation. At NASA, one goal of the Cognitive Communications project has been to find applications for such algorithms in next-generation communication systems. The aim of this effort is to identify areas and approaches in intelligent system design and implementation that could allow NASA to support a larger space- and ground-based network while simultaneously reducing the operational costs of maintaining such a system. This paper evaluates the state of various approaches, searching for algorithms that are feasible to deploy directly onto future space systems with improved processing capabilities. We begin by describing a set of heuristics through which algorithms may be compared, emphasizing memory and computational requirements and heuristic bounds. We then evaluate general-purpose processing platforms onto which such algorithms may be deployed, and consider how such systems may be packaged to offer a deterministic set of performance and decision metrics, making the devices easier for system designers to include in present and future systems. We conclude with a discussion of our findings, as well as where and how this study might continue in the future.
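The kind of heuristic comparison this abstract describes, ranking candidate algorithms by memory and computational cost, can be sketched as a weighted score over per-algorithm resource profiles. The candidate names, figures, and weights below are illustrative placeholders, not values from the paper:

```python
# Illustrative sketch: rank candidate ML algorithms for a resource-constrained
# platform by weighted resource heuristics. All figures are hypothetical.

CANDIDATES = {
    "decision_tree": {"memory_kb": 64,   "mflops": 0.5},
    "small_mlp":     {"memory_kb": 512,  "mflops": 4.0},
    "svm_rbf":       {"memory_kb": 2048, "mflops": 20.0},
}

def score(profile, w_mem=0.5, w_compute=0.5):
    """Lower is better: weighted sum of normalized resource demands."""
    max_mem = max(p["memory_kb"] for p in CANDIDATES.values())
    max_fl = max(p["mflops"] for p in CANDIDATES.values())
    return (w_mem * profile["memory_kb"] / max_mem
            + w_compute * profile["mflops"] / max_fl)

def best_candidate():
    """Return the candidate with the lowest combined resource score."""
    return min(CANDIDATES, key=lambda name: score(CANDIDATES[name]))
```

In a real trade study the weights would reflect the mission's actual power, memory, and latency budgets rather than an even split.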
Greedy Based Proactive Spectrum Handoff Scheme for Cognitive Radio Systems
Zhengjia Xu, Petrunin Ivan, Teng Li, A. Tsourdos
2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)
Pub Date: 2019-06-01 | DOI: 10.1109/CCAAW.2019.8904915
The aeronautical spectrum is becoming increasingly congested due to the rising number of non-stationary users, such as unmanned aerial vehicles (UAVs). With the growing demand for spectrum capacity, cognitive radio technology is a promising solution for maximizing spectrum utilization by enabling communication of secondary users (SUs) without interfering with primary users (PUs). In this paper we formulate and solve a multi-parametric objective function for a proactive handoff scheme in a multiple-input multiple-output (MIMO) system constrained by quality-of-service (QoS) requirements. To improve the efficiency of the handoff scheme for multiple communicating UAVs, a greedy strategy is adopted. An innovative aspect of our solution is the consideration of QoS components such as opportunistic service time and channel quality. Some components, for example collision probability and false-alarm probability, affect QoS negatively and are treated as constraints. Simulation of the handoff scheme was performed to evaluate the proposed algorithm's ability to select multiple channels as the spectrum environment changes. The handoff scheme outperforms a random selection method in terms of average utilization ratio. Analysis of the results shows that the spectrum utilization ratio can be doubled by considering a wider bandwidth (more channels) and by making the QoS requirements less strict. In both cases this leads to a near-linear increase in the time needed to generate the handoff scheme.
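The core greedy step of such a handoff scheme can be sketched as follows: filter out channels violating a QoS constraint, then greedily assign the best remaining channels. This is a simplification of the paper's multi-parametric formulation; the channel fields, threshold, and values are assumptions for illustration:

```python
# Sketch of a greedy proactive channel-selection step for secondary users.
# Field names ('quality', 'p_collision') and the threshold are illustrative.

def greedy_handoff(channels, n_users, max_collision=0.1):
    """Assign each secondary user the best admissible channel.

    channels: list of dicts with 'id', 'quality' (higher is better), and
    'p_collision' (probability of colliding with a primary user, treated
    as a hard QoS constraint).
    """
    admissible = [c for c in channels if c["p_collision"] <= max_collision]
    ranked = sorted(admissible, key=lambda c: c["quality"], reverse=True)
    return [c["id"] for c in ranked[:n_users]]
```

Relaxing `max_collision` enlarges the admissible set, which mirrors the paper's observation that looser QoS requirements raise the achievable utilization ratio.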
Evaluating Reinforcement Learning Methods for Bundle Routing Control
Gandhimathi Velusamy, R. Lent
2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)
Pub Date: 2019-06-01 | DOI: 10.1109/CCAAW.2019.8904909
Cognitive networking applications continuously adapt their actions according to observations of the environment and assigned performance goals. In this paper, one such application is evaluated, where the aim is to route bundles over parallel links with different characteristics. Several machine learning algorithms may be suitable for the task. This research tested different reinforcement learning methods as potential enablers: Q-Routing, Double Q-Learning, an actor-critic Learning Automata implementing the S-model, and the Cognitive Network Controller (CNC), which uses a spiking neural network for Q-value prediction. All cases were evaluated under the same experimental conditions. Working with either a stable or a time-varying environment with respect to link quality, each routing method was evaluated with an identical number of bundle transmissions generated at a common rate. The measurements indicate that, in general, the CNC outperforms the other methods, followed by the Learning Automata. In the presented tests, Q-Routing and Double Q-Learning achieved performance similar to a non-learning round-robin approach. It is expected that these results will help guide and improve the design of this and future cognitive networking applications.
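A minimal Q-Routing-style selector over parallel links, of the general kind compared in this paper, maintains a per-link delay estimate and balances exploration with exploitation. This is a sketch of the generic technique, not the authors' implementation; the learning rate and exploration parameters are assumed:

```python
import random

# Minimal Q-Routing-style link selector: each link's Q-value estimates its
# delivery delay, updated from observed outcomes. Parameters are illustrative.

class QLinkRouter:
    def __init__(self, links, alpha=0.2, epsilon=0.1):
        self.q = {link: 0.0 for link in links}  # estimated delivery delay
        self.alpha = alpha                      # learning rate
        self.epsilon = epsilon                  # exploration probability

    def choose(self):
        """Epsilon-greedy choice: usually the lowest-delay link."""
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return min(self.q, key=self.q.get)

    def update(self, link, observed_delay):
        """Move the link's delay estimate toward the observed delay."""
        self.q[link] += self.alpha * (observed_delay - self.q[link])
```

In a time-varying environment (as in the paper's second scenario), the constant learning rate lets stale estimates decay, at the cost of noisier steady-state behavior.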
Self-Taught Waveform Synthesis and Analysis in the Amplify-and-Forward Relay Channel
A. Anderson, Steven R. Young
2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)
Pub Date: 2019-06-01 | DOI: 10.1109/CCAAW.2019.8904892
Wireless communications plays a pivotal role in complex domains such as tactical networks and space communications. Traditional physical (PHY) layer protocols for digital communications contain chains of signal processing blocks that have been mathematically optimized to transmit information bits efficiently over noisy channels. Unfortunately, the ongoing advancement of hardware, software, and algorithm design makes it difficult for some domains to keep up with the constant change in modern communication systems. It has been shown previously that combining deep learning with digital modulation (deepmod) allows a system to learn communications on its own rather than requiring human-invented protocols. This is particularly attractive for space communications, where updating PHY-layer technologies may be prohibitively complex or expensive. A link using deepmod learns both waveform synthesis (transmit) and analysis (receive) in a self-taught manner. When deepmod is first initiated, it has no knowledge of the channel medium, but it quickly learns to communicate by synthesizing waveforms that can be successfully decoded at the other end of the link, using a custom deep neural network suited to this learning task. In this work, we show that deepmod learns in both traditional point-to-point channels and the more abstract multi-hop amplify-and-forward relay channel. In the experimental results, even though no direct link between transmitter and receiver exists, deepmod-enabled nodes still create latent information-bearing waveforms that can be used for communications.
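The amplify-and-forward relay channel itself is easy to state: the relay retransmits a power-normalized copy of its noisy observation, so noise is amplified along with the signal and there is no direct source-destination path. A toy model of this channel, with illustrative gains and noise levels (not the paper's experimental setup), looks like:

```python
import numpy as np

# Toy amplify-and-forward (AF) relay channel: source -> relay -> destination,
# with no direct link. Gains and noise levels below are illustrative.

def af_relay(x, h1=1.0, h2=1.0, noise_std=0.01, power=1.0, rng=None):
    """Pass signal x through a two-hop AF relay channel.

    The relay scales its noisy observation to meet a transmit-power budget
    and forwards it, so the first hop's noise is amplified with the signal.
    """
    rng = rng or np.random.default_rng(0)
    y_relay = h1 * x + noise_std * rng.standard_normal(len(x))
    g = np.sqrt(power / np.mean(y_relay ** 2))  # power-normalizing relay gain
    y_dest = h2 * g * y_relay + noise_std * rng.standard_normal(len(x))
    return y_dest
```

A learned transmitter/receiver pair of the deepmod kind would be trained end-to-end through a (differentiable) channel model like this one.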
AI-Driven Self-Optimizing Receivers for Cognitive Radio Networks
Yingying Wang, Xinyao Tang, G. Mendis, Jin Wei-Kocsis, A. Madanayake, S. Mandal
2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)
Pub Date: 2019-06-01 | DOI: 10.1109/CCAAW.2019.8904889
Cognitive radios (CRs) based on reconfigurable radio-frequency (RF) electronics are a key requirement for implementing next-generation dynamic spectrum access (DSA) algorithms that improve management of the congested sub-6 GHz wireless spectrum. Suitable CRs incorporate adaptive components, such as tunable notch filters, matching networks, and dynamic beamformers, that can be intelligently tuned by RF scene analysis and situational-awareness algorithms. Here we propose CR receivers that use machine learning (ML)-based modulation recognition (MR) algorithms for wideband real-time monitoring of spectral usage. The proposed systems enable detection and avoidance of anomalous signals, and they increase channel capacity and wireless data rates by exploiting white spaces in both licensed and unlicensed bands. An artificial intelligence (AI)-driven single-channel CR receiver prototype operating around 3 GHz has been implemented and tested. Experimental results show (i) good over-the-air MR accuracy for several common modulation schemes using a deep belief network (DBN), and (ii) autonomous self-optimization of the tunable RF front-end.
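For intuition about what a modulation recognizer must do, a naive non-learned baseline assigns received symbols to the nearest ideal constellation; learned classifiers such as the paper's DBN aim to beat this kind of baseline under impairments. The BPSK/QPSK constellations are standard; everything else is an illustrative sketch:

```python
import numpy as np

# Naive modulation classifier: pick the constellation whose ideal points
# best explain the received symbols. BPSK/QPSK point sets are standard.

CONSTELLATIONS = {
    "BPSK": np.array([1 + 0j, -1 + 0j]),
    "QPSK": np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2),
}

def classify(symbols):
    """Return the constellation minimizing mean nearest-point distance."""
    def cost(points):
        d = np.abs(symbols[:, None] - points[None, :])  # pairwise distances
        return d.min(axis=1).mean()
    return min(CONSTELLATIONS, key=lambda name: cost(CONSTELLATIONS[name]))
```

This baseline assumes symbol timing, carrier, and gain are already recovered; much of the difficulty a learned recognizer addresses lies in working without those assumptions.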
Investigation of Spiking Neural Networks for Modulation Recognition using Spike-Timing-Dependent Plasticity
E. Knoblock, H. Bahrami
2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)
Pub Date: 2019-06-01 | DOI: 10.1109/CCAAW.2019.8904911
Spiking neural networks (SNNs) operating on neuromorphic hardware can enable cognitive functionality with relatively low power consumption compared to other artificial neural network implementations, making them ideally suited to resource-constrained space platforms such as CubeSats. The objective of this study is to investigate a modulation recognition capability using SNNs, which may eventually be deployed on neuromorphic hardware. This preliminary analysis uses a software simulation approach with an unsupervised learning algorithm based on spike-timing-dependent plasticity for classification of digital modulation constellation patterns. Such a modulation recognition capability can provide enhanced situational awareness for a space platform and facilitate additional high-level cognitive functionality to be investigated in future studies.
Development of a compact and flexible software-defined radio transmitter for small satellite applications
Susann Pätschke, S. Klinkner, L. Kramer
2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)
Pub Date: 2019-06-01 | DOI: 10.1109/CCAAW.2019.8904882
The volume of data generated by Earth observation satellites has increased drastically in recent years. Consequently, more RF bandwidth is required to download data to the ground, and migration to higher frequencies is an option. However, the limited volume, mass, and DC power of small satellite platforms constrain how much the bandwidth can be increased. The research reported here focuses on enhancing integrity, reconfigurability, and miniaturization, and it studies cost-efficient solutions for implementing high-bandwidth downlinks on small satellites. Clearly, a bandwidth-efficient implementation and a migration from the ham radio S-band to the corresponding X-band will increase the download capacity. Low-cost ground receivers that support the telecommunication standard DVB-S2 are available, and DVB-S2 can be adapted for small satellite applications by implementing the Consultative Committee for Space Data Systems (CCSDS) standard above DVB-S2 in the protocol stack. DVB-S2 supports constant, variable, and adaptive coding and modulation, so the modulation and coding scheme can change on a frame-by-frame basis depending on channel conditions. In this paper, we describe the development of an in-house, low-cost, flexible radio platform built from commercial off-the-shelf components. An FPGA-based architecture implementing the CCSDS-above-DVB-S2 protocol stack is presented. The design can compensate for changes in link conditions, increasing download capacity by 66% with variable coding and modulation and by 130% with adaptive coding and modulation. This increase is crucial for ground terminals located in regions with high rain loss, such as South Asia, and for satellite constellations in low Earth orbit, where each satellite has a limited ground station contact window.
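The adaptive coding and modulation (ACM) decision that drives the reported capacity gain reduces to a table lookup: pick the most spectrally efficient MODCOD whose Es/N0 threshold the link still clears with some margin. The thresholds below are rounded illustrative values, not the exact DVB-S2 table:

```python
# Sketch of DVB-S2-style adaptive coding and modulation (ACM) selection.
# Spectral efficiencies and Es/N0 thresholds are rounded illustrative values.

MODCODS = [  # (name, spectral efficiency in bit/s/Hz, min Es/N0 in dB)
    ("QPSK 1/2",   1.0,  1.0),
    ("QPSK 3/4",   1.5,  4.0),
    ("8PSK 3/4",   2.2,  7.9),
    ("16APSK 3/4", 3.0, 10.2),
]

def select_modcod(esn0_db, margin_db=0.5):
    """Return the most efficient MODCOD supportable with the given margin."""
    usable = [m for m in MODCODS if esn0_db - margin_db >= m[2]]
    if not usable:
        return None  # link cannot close: defer transmission
    return max(usable, key=lambda m: m[1])
```

Under rain fade the measured Es/N0 drops, the selection steps down to a more robust MODCOD, and the link stays closed at reduced rate instead of failing outright.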
Cognitive Domain Ontologies Based on Loihi Spiking Neurons Implemented Using a Confabulation Inspired Network
C. Yakopcic, Jacob Freeman, T. Taha, Scott Douglass, Qing Wu
2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)
Pub Date: 2019-06-01 | DOI: 10.1109/CCAAW.2019.8904891
Cognitive agents are typically used in autonomous systems for automated decision making. These systems interact with their environment in real time and are generally heavily power constrained, so there is a strong need for a real-time agent running on a low-power platform. The agent examined here is the Cognitively Enhanced Complex Event Processing (CECEP) architecture, an autonomous decision-support tool that reasons like humans and enables enhanced agent-based decision making. It has applications in a wide variety of domains, including autonomous systems, operations research, intelligence analysis, and data mining. One of the most time-consuming and key components of CECEP is the mining of knowledge from a repository described as a Cognitive Domain Ontology (CDO). Given the number of possible solutions to the problems tasked to CDOs, determining the optimal solutions can be very time consuming. In this work we show how problems that are often solved using CDOs can be carried out using spiking neurons. Furthermore, we discuss using the Intel Loihi manycore spiking neural network processor to solve CDOs with a technique inspired by a confabulation network. This work demonstrates the feasibility of implementing CDOs on embedded, low-power, neuromorphic spiking hardware.
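Confabulation-style inference, which the paper's network is inspired by, scores each candidate conclusion by the product of its conditional probabilities with the active context symbols and takes the winner. The toy knowledge base and probabilities below are invented purely for illustration:

```python
import math

# Toy confabulation-style winner-take-all inference. The conditional
# probabilities below are made-up illustrative values, not from the paper.

COND = {  # p(context_symbol | conclusion)
    ("warm", "beach"): 0.9, ("sunny", "beach"): 0.8,
    ("warm", "ski"):   0.1, ("sunny", "ski"):   0.5,
}

def confabulate(context, conclusions, floor=1e-3):
    """Return the conclusion with the highest product of conditionals.

    Missing (context, conclusion) pairs get a small floor probability so a
    single unknown link does not zero out an otherwise strong candidate.
    """
    def cogency(c):
        return math.prod(COND.get((s, c), floor) for s in context)
    return max(conclusions, key=cogency)
```

In a spiking implementation of this idea, the products become sums of log-probabilities accumulated as synaptic input, and the winner-take-all step maps onto lateral inhibition.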
Pub Date : 2019-06-01DOI: 10.1109/CCAAW.2019.8904900
Ryan Linnabary, A. O'Brien, G. Smith, C. Ball, J. Johnson
Distributed satellite constellations utilizing networks of small satellites will be a key enabler of new observing strategies in the next generation of NASA missions. Small satellite instruments are becoming more capable, but are still resource constrained (i.e. power, data, scanning systems, etc.) in many situations. On a system scale, the primary purpose of collaborative communication among small satellites is to achieve system-level adaptivity. Collaborative communications however may also dramatically increase the complexity of the control algorithms for small satellite communication networks. Application of cognitive communication methods is one promising method to address this problem. In this paper, we discuss our recent investigations into how machine learning (ML) algorithms can be utilized in the high-level decision making of a communication system in a distributed satellite mission. We performed simulation studies to explore how the perception-action cycle could be applied to a collaborative small-satellite networks. To support this, we are using a recently developed open-source C++ library for the simulation of autonomous and collaborative networks of adaptive sensors.
{"title":"Using Cognitive Communications to Increase the Operational Value of Collaborative Networks of Satellites","authors":"Ryan Linnabary, A. O'Brien, G. Smith, C. Ball, J. Johnson","doi":"10.1109/CCAAW.2019.8904900","DOIUrl":"https://doi.org/10.1109/CCAAW.2019.8904900","url":null,"abstract":"Distributed satellite constellations utilizing networks of small satellites will be a key enabler of new observing strategies in the next generation of NASA missions. Small satellite instruments are becoming more capable, but in many situations are still resource constrained (e.g., power, data, scanning systems). On a system scale, the primary purpose of collaborative communication among small satellites is to achieve system-level adaptivity. However, collaborative communications may also dramatically increase the complexity of the control algorithms for small-satellite communication networks. Applying cognitive communication methods is one promising way to address this problem. In this paper, we discuss our recent investigations into how machine learning (ML) algorithms can be utilized in the high-level decision making of a communication system in a distributed satellite mission. We performed simulation studies to explore how the perception-action cycle could be applied to collaborative small-satellite networks. 
To support this, we are using a recently developed open-source C++ library for the simulation of autonomous and collaborative networks of adaptive sensors.","PeriodicalId":196580,"journal":{"name":"2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123595445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
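The perception-action cycle the abstract above refers to can be illustrated with a minimal sketch. This is not the paper's open-source C++ library; it is a hypothetical epsilon-greedy agent that repeatedly perceives a noisy reward signal and adapts its choice of downlink strategy, closing the perceive-decide-act-learn loop. All class names, action labels, and reward values are illustrative assumptions.

```python
import random


class PerceptionActionAgent:
    """Epsilon-greedy bandit: a minimal perception-action loop in which the
    agent balances exploring downlink strategies against exploiting the
    strategy with the best observed reward so far."""

    def __init__(self, actions, epsilon=0.1, seed=0):
        self.actions = list(actions)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {a: 0 for a in self.actions}   # times each action tried
        self.values = {a: 0.0 for a in self.actions} # running reward estimates

    def act(self):
        # Decision step: explore with probability epsilon, else exploit.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.values[a])

    def learn(self, action, reward):
        # Perception step: fold the observed reward into an incremental mean.
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (reward - self.values[action]) / n


def simulate(agent, true_rewards, steps=2000, seed=1):
    """Run the loop against a toy environment with noisy per-action rewards
    and return the action the agent ends up preferring."""
    rng = random.Random(seed)
    for _ in range(steps):
        a = agent.act()
        reward = true_rewards[a] + rng.gauss(0.0, 0.1)  # environment response
        agent.learn(a, reward)
    return max(agent.values, key=agent.values.get)
```

In a collaborative constellation, the "environment" would include the other satellites' behavior, which is exactly what drives the control-complexity growth the abstract notes; this single-agent sketch only shows the inner loop each node would run.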
Pub Date : 2019-06-01DOI: 10.1109/CCAAW.2019.8904899
C. Yakopcic, Nayim Rahman, Tanvir Atahary, Md. Zahangir Alom, T. Taha, Alex Beigh, Scott Douglass
Asset allocation is a compute-intensive combinatorial optimization problem commonly tasked to autonomous decision-making systems. However, cognitive agents interact with their environment in real time and are generally heavily power constrained. Thus, there is a strong need for a real-time asset allocation agent running on a low-power computing platform to ensure efficiency and portability. As an alternative to traditional techniques, the work presented in this paper describes how spiking neuron algorithms can be used to carry out asset allocation. We show that a significant reduction in computation time can be gained if the user is willing to accept a near-optimal solution using our spiking neuron approach. Recently, specialized neuromorphic spiking processors have demonstrated a dramatic reduction in power consumption relative to traditional processing techniques for certain applications. The improved efficiencies are primarily due to unique algorithmic processing that reduces data movement and increases parallel computation. In this work, we use the TrueNorth spiking neural network processor to implement our asset allocation algorithm. With an operating power of approximately 50 mW, we show the feasibility of performing portable, low-power task allocation on a spiking neuromorphic processor.
{"title":"Spiking Neural Network for Asset Allocation Implemented Using the TrueNorth System","authors":"C. Yakopcic, Nayim Rahman, Tanvir Atahary, Md. Zahangir Alom, T. Taha, Alex Beigh, Scott Douglass","doi":"10.1109/CCAAW.2019.8904899","DOIUrl":"https://doi.org/10.1109/CCAAW.2019.8904899","url":null,"abstract":"Asset allocation is a compute-intensive combinatorial optimization problem commonly tasked to autonomous decision-making systems. However, cognitive agents interact with their environment in real time and are generally heavily power constrained. Thus, there is a strong need for a real-time asset allocation agent running on a low-power computing platform to ensure efficiency and portability. As an alternative to traditional techniques, the work presented in this paper describes how spiking neuron algorithms can be used to carry out asset allocation. We show that a significant reduction in computation time can be gained if the user is willing to accept a near-optimal solution using our spiking neuron approach. Recently, specialized neuromorphic spiking processors have demonstrated a dramatic reduction in power consumption relative to traditional processing techniques for certain applications. The improved efficiencies are primarily due to unique algorithmic processing that reduces data movement and increases parallel computation. In this work, we use the TrueNorth spiking neural network processor to implement our asset allocation algorithm. 
With an operating power of approximately 50 mW, we show the feasibility of performing portable low-power task allocation on a spiking neuromorphic processor.","PeriodicalId":196580,"journal":{"name":"2019 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124018977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
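The trade-off the abstract above describes, accepting a near-optimal allocation in exchange for a large reduction in computation time, can be shown with a toy sketch. This is not the paper's spiking TrueNorth algorithm: it contrasts an exhaustive optimal assignment search (factorial time) with a greedy heuristic (roughly O(n² log n)) on a hypothetical agent-to-task cost matrix; all data values are illustrative.

```python
from itertools import permutations


def optimal_assignment(cost):
    """Exhaustive search over all agent-to-task assignments.
    Guaranteed optimal, but factorial in the number of agents."""
    n = len(cost)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best, best_cost = list(perm), c
    return best, best_cost


def greedy_assignment(cost):
    """Greedy heuristic: repeatedly take the cheapest remaining
    (agent, task) pair. Near-optimal, but far cheaper to compute."""
    n = len(cost)
    pairs = sorted((cost[i][j], i, j) for i in range(n) for j in range(n))
    used_agents, used_tasks = set(), set()
    assign, total = [None] * n, 0
    for c, i, j in pairs:
        if i not in used_agents and j not in used_tasks:
            assign[i], total = j, total + c
            used_agents.add(i)
            used_tasks.add(j)
    return assign, total
```

On a 3x3 example such as `[[4, 1, 3], [2, 0, 5], [3, 2, 2]]`, the greedy pass returns cost 6 against the true optimum of 5, which mirrors the paper's point: a small optimality gap can buy a large drop in compute, and a spiking implementation pushes the energy cost down further still.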