Chenglong Yin, Fei Zhang, Bin Hao, Zijian Fu, Xiaoyu Pang
Computer vision technology is being applied at an unprecedented pace in fields such as 3D scene reconstruction, object detection and recognition, video content tracking, pose estimation, and motion estimation. To address the low accuracy and high time complexity of traditional image feature-point matching, a fast image-matching algorithm based on nonlinear filtering is proposed. By applying nonlinear diffusion filtering to scene images, detail and edge information can be effectively extracted. The feature descriptors of the feature points are converted into binary form, which occupies less storage space and thus reduces matching time. An adaptive RANSAC algorithm is used to eliminate mismatched feature points, improving matching accuracy. Experimental results on the Mikolajczyk image dataset, comparing the proposed method with the SIFT algorithm and with the SURF, BRISK, and ORB algorithms that improve upon SIFT, show that the fast image-matching algorithm based on nonlinear filtering reduces matching time by three-quarters, with an overall average accuracy more than 7% higher than that of the other algorithms. These experiments demonstrate that the proposed algorithm offers better robustness and real-time performance.
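For orientation only (this is not the authors' implementation), the same ingredients can be assembled from OpenCV: the AKAZE detector builds its scale space with nonlinear diffusion filtering and emits binary descriptors that are matched by Hamming distance, and a RANSAC homography fit rejects mismatches. The adaptive RANSAC of the paper is approximated here by OpenCV's standard RANSAC, and the image file names are placeholders.

import cv2
import numpy as np

img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

# AKAZE: nonlinear-diffusion scale space + binary (MLDB) descriptors.
akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(img1, None)
kp2, des2 = akaze.detectAndCompute(img2, None)

# Binary descriptors are compared with Hamming distance, which is cheap.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC removes mismatched pairs while estimating the homography.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("inlier ratio:", float(inlier_mask.mean()))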
{"title":"Research on a Fast Image-Matching Algorithm Based on Nonlinear Filtering","authors":"Chenglong Yin, Fei Zhang, Bin Hao, Zijian Fu, Xiaoyu Pang","doi":"10.3390/a17040165","DOIUrl":"https://doi.org/10.3390/a17040165","url":null,"abstract":"Computer vision technology is being applied at an unprecedented speed in various fields such as 3D scene reconstruction, object detection and recognition, video content tracking, pose estimation, and motion estimation. To address the issues of low accuracy and high time complexity in traditional image feature point matching, a fast image-matching algorithm based on nonlinear filtering is proposed. By applying nonlinear diffusion filtering to scene images, details and edge information can be effectively extracted. The feature descriptors of the feature points are transformed into binary form, occupying less storage space and thus reducing matching time. The adaptive RANSAC algorithm is utilized to eliminate mismatched feature points, thereby improving matching accuracy. Our experimental results on the Mikolajcyzk image dataset comparing the SIFT algorithm with SURF-, BRISK-, and ORB-improved algorithms based on the SIFT algorithm conclude that the fast image-matching algorithm based on nonlinear filtering reduces matching time by three-quarters, with an overall average accuracy of over 7% higher than other algorithms. These experiments demonstrate that the fast image-matching algorithm based on nonlinear filtering has better robustness and real-time performance.","PeriodicalId":502609,"journal":{"name":"Algorithms","volume":" 33","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140685025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, we investigate Quantum Long Short-Term Memory and Quantum Gated Recurrent Unit models integrated with Variational Quantum Circuits for modeling complex dynamical systems, including the Van der Pol oscillator, coupled oscillators, and the Lorenz system. We implement these quantum machine learning techniques and compare their performance with traditional Long Short-Term Memory and Gated Recurrent Unit models. The results reveal that the quantum-based models deliver superior precision and more stable loss metrics throughout 100 epochs for both the Van der Pol oscillator and coupled harmonic oscillators, and 20 epochs for the Lorenz system. The Quantum Gated Recurrent Unit outperforms the competing models. For the Van der Pol oscillator, it reports MAE 0.0902 and RMSE 0.1031 for variable x and MAE 0.1500 and RMSE 0.1943 for y; for the coupled oscillators, Oscillator 1 shows MAE 0.2411 and RMSE 0.2701 and Oscillator 2 shows MAE 0.0482 and RMSE 0.0602; and for the Lorenz system, the results are MAE 0.4864 and RMSE 0.4971 for x, MAE 0.4723 and RMSE 0.4846 for y, and MAE 0.4555 and RMSE 0.4745 for z. These outcomes mark a significant advancement in the field of quantum machine learning.
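As a rough illustration of the kind of building block involved (not the circuit used in the study), the snippet below defines a small variational quantum circuit with PennyLane and wraps it as a PyTorch layer; such a layer can play the role of a gate inside a QLSTM or QGRU cell. The qubit count, embedding, and entangler layout are assumptions made for this sketch.

import pennylane as qml
import torch

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def vqc(inputs, weights):
    # Encode the classical input (e.g., a linear map of [x_t, h_{t-1}])
    # into rotation angles, then apply trainable entangling layers.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (n_layers, n_qubits)}
gate = qml.qnn.TorchLayer(vqc, weight_shapes)   # trainable quantum "gate"

out = gate(torch.rand(8, n_qubits))   # batch of 8 four-dimensional inputs
print(out.shape)                      # torch.Size([8, 4])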
{"title":"Quantum Recurrent Neural Networks: Predicting the Dynamics of Oscillatory and Chaotic Systems","authors":"Yuanbo Chen, Abdul Khaliq","doi":"10.3390/a17040163","DOIUrl":"https://doi.org/10.3390/a17040163","url":null,"abstract":"In this study, we investigate Quantum Long Short-Term Memory and Quantum Gated Recurrent Unit integrated with Variational Quantum Circuits in modeling complex dynamical systems, including the Van der Pol oscillator, coupled oscillators, and the Lorenz system. We implement these advanced quantum machine learning techniques and compare their performance with traditional Long Short-Term Memory and Gated Recurrent Unit models. The results of our study reveal that the quantum-based models deliver superior precision and more stable loss metrics throughout 100 epochs for both the Van der Pol oscillator and coupled harmonic oscillators, and 20 epochs for the Lorenz system. The Quantum Gated Recurrent Unit outperforms competing models, showcasing notable performance metrics. For the Van der Pol oscillator, it reports MAE 0.0902 and RMSE 0.1031 for variable x and MAE 0.1500 and RMSE 0.1943 for y; for coupled oscillators, Oscillator 1 shows MAE 0.2411 and RMSE 0.2701 and Oscillator 2 MAE is 0.0482 and RMSE 0.0602; and for the Lorenz system, the results are MAE 0.4864 and RMSE 0.4971 for x, MAE 0.4723 and RMSE 0.4846 for y, and MAE 0.4555 and RMSE 0.4745 for z. These outcomes mark a significant advancement in the field of quantum machine learning.","PeriodicalId":502609,"journal":{"name":"Algorithms","volume":" 42","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140684463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing lung cancer diagnosis requires precise early detection methods. This study introduces an automated diagnostic system leveraging computed tomography (CT) scans for early lung cancer identification. The main approach is the integration of three distinct feature analyses: the novel 3D-Local Octal Pattern (LOP) descriptor for texture analysis, a 3D-Convolutional Neural Network (CNN) for extracting deep features, and geometric feature analysis to characterize pulmonary nodules. The 3D-LOP method captures nodule texture by analyzing the orientation and magnitude of voxel relationships, enabling discriminative features to be distinguished. Simultaneously, the 3D-CNN extracts deep features from raw CT scans, providing comprehensive insights into nodule characteristics. Geometric features, which assess nodule shape, further augment this analysis, offering a holistic view of potential malignancies. By amalgamating these analyses, our system employs a probability-based linear classifier to deliver a final diagnostic output. Validated on 822 Lung Image Database Consortium (LIDC) cases, the system's performance was exceptional, with 97.84% accuracy, 98.11% sensitivity, 94.73% specificity, and an Area Under the ROC Curve (AUC) of 0.9912. These results highlight the system's potential as a significant advancement in clinical diagnostics, offering a reliable, non-invasive tool for lung cancer detection that promises to improve patient outcomes through early diagnosis.
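The 3D-LOP descriptor itself is not reproduced here; the toy NumPy function below only conveys the general flavour of a 3D local-pattern texture signature (sign comparisons against neighbouring voxels packed into a code and histogrammed). The 6-neighbour scheme, the binary quantization, and the bin count are arbitrary choices for this sketch.

import numpy as np

def local_pattern_histogram_3d(vol):
    # Toy 3D local-pattern descriptor (not the paper's 3D-LOP): each
    # interior voxel is compared with its 6 axis neighbours, the sign
    # pattern is packed into a 6-bit code, and the normalized histogram
    # of codes serves as a texture signature for the volume.
    v = vol.astype(np.float32)
    center = v[1:-1, 1:-1, 1:-1]
    neighbours = [
        v[:-2, 1:-1, 1:-1], v[2:, 1:-1, 1:-1],
        v[1:-1, :-2, 1:-1], v[1:-1, 2:, 1:-1],
        v[1:-1, 1:-1, :-2], v[1:-1, 1:-1, 2:],
    ]
    code = np.zeros(center.shape, dtype=np.int32)
    for bit, nb in enumerate(neighbours):
        code |= (nb >= center).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=64, range=(0, 64))
    return hist / hist.sum()

nodule = np.random.rand(32, 32, 32)   # stand-in for a CT nodule patch
print(local_pattern_histogram_3d(nodule)[:8])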
{"title":"Advancing Pulmonary Nodule Diagnosis by Integrating Engineered and Deep Features Extracted from CT Scans","authors":"Wiem Safta, A. Shaffie","doi":"10.3390/a17040161","DOIUrl":"https://doi.org/10.3390/a17040161","url":null,"abstract":"Enhancing lung cancer diagnosis requires precise early detection methods. This study introduces an automated diagnostic system leveraging computed tomography (CT) scans for early lung cancer identification. The main approach is the integration of three distinct feature analyses: the novel 3D-Local Octal Pattern (LOP) descriptor for texture analysis, the 3D-Convolutional Neural Network (CNN) for extracting deep features, and geometric feature analysis to characterize pulmonary nodules. The 3D-LOP method innovatively captures nodule texture by analyzing the orientation and magnitude of voxel relationships, enabling the distinction of discriminative features. Simultaneously, the 3D-CNN extracts deep features from raw CT scans, providing comprehensive insights into nodule characteristics. Geometric features and assessing nodule shape further augment this analysis, offering a holistic view of potential malignancies. By amalgamating these analyses, our system employs a probability-based linear classifier to deliver a final diagnostic output. Validated on 822 Lung Image Database Consortium (LIDC) cases, the system’s performance was exceptional, with measures of 97.84%, 98.11%, 94.73%, and 0.9912 for accuracy, sensitivity, specificity, and Area Under the ROC Curve (AUC), respectively. These results highlight the system’s potential as a significant advancement in clinical diagnostics, offering a reliable, non-invasive tool for lung cancer detection that promises to improve patient outcomes through early diagnosis.","PeriodicalId":502609,"journal":{"name":"Algorithms","volume":" 40","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140687792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper explores the concept of proportional lumpability as an extension of the original definition of lumpability, addressing the challenges posed by the state space explosion problem in computing performance indices for large stochastic models. Lumpability traditionally relies on state aggregation techniques and is applicable to Markov chains demonstrating structural regularity. Proportional lumpability extends this idea, proposing that the transition rates of a Markov chain can be modified by certain factors, resulting in a lumpable new Markov chain. This concept facilitates the derivation of precise performance indices for the original process. This paper establishes the well-defined nature of the problem of computing the coarsest proportional lumpability that refines a given initial partition, ensuring a unique solution exists. Additionally, a polynomial time algorithm is introduced to solve this problem, offering valuable insights into both the concept of proportional lumpability and the broader realm of partition refinement techniques. The effectiveness of proportional lumpability is demonstrated through a case study that consists of designing a model to investigate selfish mining behaviors on public blockchains. This research contributes to a better understanding of efficient approaches for handling large stochastic models and highlights the practical applicability of proportional lumpability in deriving exact performance indices.
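For readers unfamiliar with the underlying machinery, the sketch below shows a naive signature-splitting loop that refines an initial partition toward the coarsest ordinarily lumpable one; it is not the paper's polynomial-time algorithm, and the proportional variant (which additionally allows each state's rates to be rescaled by a per-state factor) is deliberately omitted.

from collections import defaultdict

def coarsest_lumpable_refinement(rate, partition):
    # rate[s][t] is the transition rate s -> t; partition is a list of
    # frozensets of states.  Blocks are repeatedly split by the vector of
    # aggregated outgoing rates into every other block until stable.
    parts = [frozenset(b) for b in partition]
    while True:
        new_parts = []
        for b in parts:
            groups = defaultdict(set)
            for s in b:
                sig = tuple(round(sum(rate[s].get(t, 0.0) for t in c), 9)
                            for c in parts if c is not b)
                groups[sig].add(s)
            new_parts.extend(frozenset(g) for g in groups.values())
        if len(new_parts) == len(parts):   # no block was split
            return parts
        parts = new_parts

# Birth-death chain 0 <-> 1 <-> 2: states 0 and 2 can be lumped together.
rate = {0: {1: 2.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 2.0}}
print(coarsest_lumpable_refinement(rate, [frozenset({0, 2}), frozenset({1})]))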
{"title":"Efficient Algorithm for Proportional Lumpability and Its Application to Selfish Mining in Public Blockchains","authors":"Carla Piazza, Sabina Rossi, Daria Smuseva","doi":"10.3390/a17040159","DOIUrl":"https://doi.org/10.3390/a17040159","url":null,"abstract":"This paper explores the concept of proportional lumpability as an extension of the original definition of lumpability, addressing the challenges posed by the state space explosion problem in computing performance indices for large stochastic models. Lumpability traditionally relies on state aggregation techniques and is applicable to Markov chains demonstrating structural regularity. Proportional lumpability extends this idea, proposing that the transition rates of a Markov chain can be modified by certain factors, resulting in a lumpable new Markov chain. This concept facilitates the derivation of precise performance indices for the original process. This paper establishes the well-defined nature of the problem of computing the coarsest proportional lumpability that refines a given initial partition, ensuring a unique solution exists. Additionally, a polynomial time algorithm is introduced to solve this problem, offering valuable insights into both the concept of proportional lumpability and the broader realm of partition refinement techniques. The effectiveness of proportional lumpability is demonstrated through a case study that consists of designing a model to investigate selfish mining behaviors on public blockchains. This research contributes to a better understanding of efficient approaches for handling large stochastic models and highlights the practical applicability of proportional lumpability in deriving exact performance indices.","PeriodicalId":502609,"journal":{"name":"Algorithms","volume":"9 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140700776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyzing point clouds with neural networks is a current research hotspot. To analyze the 3D geometric features of point clouds, most neural networks improve performance by adding local geometric operators and trainable parameters. However, deep learning usually requires a large amount of computational resources for training and inference, which poses challenges for hardware devices and energy consumption. Therefore, some research has begun to use nonparametric approaches to extract features. Point-NN combines nonparametric modules to build a nonparametric network for 3D point cloud analysis; its components include operations such as trigonometric embedding, farthest point sampling (FPS), k-nearest neighbor (k-NN), and pooling. However, Point-NN applies its trigonometric feature embedding somewhat blindly during feature extraction. To reduce this blindness as much as possible, we utilize a nonparametric energy-function-based attention mechanism (ResSimAM): the energy of each embedded feature is computed by the energy function, and ResSimAM uses these energies to reweight and enhance the embedded features without adding any parameters to the original network. Point-NN must also compute the similarity between features at the naive feature-matching stage, but differences in feature magnitude arising during feature extraction may affect the final matching result. We therefore apply the Squash operation, a nonlinear mapping that compresses each feature vector into a bounded range without changing its direction in the vector space, eliminating the effect of feature magnitude and allowing the naive feature matching to be completed more reliably. We insert these modules to build a nonparametric network, Point-Sim, which performs well on 3D classification tasks. Building on this, we extend it to the lightweight neural network Point-SimP by adding a small number of trainable parameters for the point cloud classification task; it requires only 0.8 M parameters for high-performance analysis. Experimental results demonstrate the effectiveness of the proposed algorithm on the point cloud shape classification task: the corresponding accuracies on ModelNet40 and ScanObjectNN are 83.9% and 66.3% with 0 M parameters (i.e., without any training) and 93.3% and 86.6% with 0.8 M parameters, and Point-SimP reaches a test speed of 962 samples per second on ModelNet40. These results show that the proposed method effectively improves the performance of point cloud classification networks.
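A small NumPy sketch of the two parameter-free operations emphasized above, written from the published SimAM energy formulation and the capsule-network Squash function; the feature shapes and the lambda value are illustrative, and this is not the Point-Sim source code.

import numpy as np

def simam_weight(feat, lam=1e-4):
    # SimAM-style attention: per-element energy grows with the squared
    # deviation from the channel mean, and a sigmoid of that energy
    # reweights the features without introducing any parameters.
    mu = feat.mean(axis=0, keepdims=True)
    var = feat.var(axis=0, keepdims=True)
    energy = (feat - mu) ** 2 / (4.0 * (var + lam)) + 0.5
    return feat * (1.0 / (1.0 + np.exp(-energy)))

def squash(feat, eps=1e-9):
    # Squash rescales each feature vector to length < 1 without changing
    # its direction, removing magnitude differences before naive matching.
    norm = np.linalg.norm(feat, axis=-1, keepdims=True)
    return (norm ** 2 / (1.0 + norm ** 2)) * feat / (norm + eps)

pts = np.random.randn(1024, 72)           # embedded point features
out = squash(simam_weight(pts))
print(out.shape, float(np.linalg.norm(out, axis=-1).max()))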
{"title":"Point-Sim: A Lightweight Network for 3D Point Cloud Classification","authors":"Jiachen Guo, Wenjie Luo","doi":"10.3390/a17040158","DOIUrl":"https://doi.org/10.3390/a17040158","url":null,"abstract":"Analyzing point clouds with neural networks is a current research hotspot. In order to analyze the 3D geometric features of point clouds, most neural networks improve the network performance by adding local geometric operators and trainable parameters. However, deep learning usually requires a large amount of computational resources for training and inference, which poses challenges to hardware devices and energy consumption. Therefore, some researches have started to try to use a nonparametric approach to extract features. Point-NN combines nonparametric modules to build a nonparametric network for 3D point cloud analysis, and the nonparametric components include operations such as trigonometric embedding, farthest point sampling (FPS), k-nearest neighbor (k-NN), and pooling. However, Point-NN has some blindness in feature embedding using the trigonometric function during feature extraction. To eliminate this blindness as much as possible, we utilize a nonparametric energy function-based attention mechanism (ResSimAM). The embedded features are enhanced by calculating the energy of the features by the energy function, and then the ResSimAM is used to enhance the weights of the embedded features by the energy to enhance the features without adding any parameters to the original network; Point-NN needs to compute the similarity between each feature at the naive feature similarity matching stage; however, the magnitude difference of the features in vector space during the feature extraction stage may affect the final matching result. We use the Squash operation to squeeze the features. This nonlinear operation can make the features squeeze to a certain range without changing the original direction in the vector space, thus eliminating the effect of feature magnitude, and we can ultimately better complete the naive feature matching in the vector space. We inserted these modules into the network and build a nonparametric network, Point-Sim, which performs well in 3D classification tasks. Based on this, we extend the lightweight neural network Point-SimP by adding some trainable parameters for the point cloud classification task, which requires only 0.8 M parameters for high performance analysis. Experimental results demonstrate the effectiveness of our proposed algorithm in the point cloud shape classification task. The corresponding results on ModelNet40 and ScanObjectNN are 83.9% and 66.3% for 0 M parameters—without any training—and 93.3% and 86.6% for 0.8 M parameters. The Point-SimP reaches a test speed of 962 samples per second on the ModelNet40 dataset. The experimental results show that our proposed method effectively improves the performance on point cloud classification networks.","PeriodicalId":502609,"journal":{"name":"Algorithms","volume":"74 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140702486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Al-Betar, A. Abasi, Zaid Abdi Alkareem Alyasseri, Salam Fraihat, Raghad Falih Mohammed
The pressing need for sustainable development solutions necessitates innovative data-driven tools. Machine learning (ML) offers significant potential, but faces challenges in centralized approaches, particularly concerning data privacy and resource constraints in geographically dispersed settings. Federated learning (FL) emerges as a transformative paradigm for sustainable development by decentralizing ML training to edge devices. However, communication bottlenecks hinder its scalability and sustainability. This paper introduces an innovative FL framework that enhances communication efficiency. The proposed framework addresses the communication bottleneck by harnessing the power of the Lemurs optimizer (LO), a nature-inspired metaheuristic algorithm. Inspired by the cooperative foraging behavior of lemurs, the LO strategically selects the most relevant model updates for communication, significantly reducing communication overhead. The framework was rigorously evaluated on CIFAR-10, MNIST, rice leaf disease, and waste recycling plant datasets representing various areas of sustainable development. Experimental results demonstrate that the proposed framework reduces communication overhead by over 15% on average compared to baseline FL approaches, while maintaining high model accuracy. This breakthrough extends the applicability of FL to resource-constrained environments, paving the way for more scalable and sustainable solutions for real-world initiatives.
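The Lemurs optimizer itself is not reproduced here; the sketch below only shows where such a metaheuristic plugs into a communication-reduced federated round, with a simple update-norm score standing in for the LO's relevance selection. All names and constants are illustrative.

import numpy as np

def fed_round_with_selection(global_w, client_weights, keep_ratio=0.25):
    # Each client sends a candidate update; only the top-scoring fraction
    # is actually communicated and averaged into the global model.  The
    # scoring rule is a placeholder for the Lemurs-optimizer selection.
    deltas = [w - global_w for w in client_weights]
    scores = np.array([np.linalg.norm(d) for d in deltas])
    k = max(1, int(keep_ratio * len(deltas)))
    selected = np.argsort(scores)[-k:]
    return global_w + np.mean([deltas[i] for i in selected], axis=0)

w = np.zeros(10)
clients = [w + 0.01 * np.random.randn(10) for _ in range(8)]
w = fed_round_with_selection(w, clients, keep_ratio=0.25)
print(w)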
{"title":"A Communication-Efficient Federated Learning Framework for Sustainable Development Using Lemurs Optimizer","authors":"M. Al-Betar, A. Abasi, Zaid Abdi Alkareem Alyasseri, Salam Fraihat, Raghad Falih Mohammed","doi":"10.3390/a17040160","DOIUrl":"https://doi.org/10.3390/a17040160","url":null,"abstract":"The pressing need for sustainable development solutions necessitates innovative data-driven tools. Machine learning (ML) offers significant potential, but faces challenges in centralized approaches, particularly concerning data privacy and resource constraints in geographically dispersed settings. Federated learning (FL) emerges as a transformative paradigm for sustainable development by decentralizing ML training to edge devices. However, communication bottlenecks hinder its scalability and sustainability. This paper introduces an innovative FL framework that enhances communication efficiency. The proposed framework addresses the communication bottleneck by harnessing the power of the Lemurs optimizer (LO), a nature-inspired metaheuristic algorithm. Inspired by the cooperative foraging behavior of lemurs, the LO strategically selects the most relevant model updates for communication, significantly reducing communication overhead. The framework was rigorously evaluated on CIFAR-10, MNIST, rice leaf disease, and waste recycling plant datasets representing various areas of sustainable development. Experimental results demonstrate that the proposed framework reduces communication overhead by over 15% on average compared to baseline FL approaches, while maintaining high model accuracy. This breakthrough extends the applicability of FL to resource-constrained environments, paving the way for more scalable and sustainable solutions for real-world initiatives.","PeriodicalId":502609,"journal":{"name":"Algorithms","volume":"56 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140702694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The systematic generation of prime numbers has been largely ignored since the 1990s, when most IT research related to prime numbers migrated to the use of very large primes for cryptography, and little effort was made to further the knowledge of techniques like sieving. At present, sieving techniques are mostly used for didactic purposes, and no real advances seem to have been made in this domain. This systematic review analyzes the theoretical advances in sieving that have occurred up to the present. The research followed the PRISMA 2020 guidelines and was conducted using three established databases: Web of Science, IEEE Xplore and Scopus. Our review aims to provide an extensive overview of the progress in prime sieving; unfortunately, no significant advancements in this field were identified in the last 20 years.
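As a point of reference for the techniques the review surveys, a textbook sieve of Eratosthenes is shown below; it is the didactic baseline, not one of the optimized sieves covered in the review.

import numpy as np

def sieve_of_eratosthenes(n):
    # Mark composites by striking out multiples of every prime p <= sqrt(n);
    # the indices that survive are exactly the primes up to n.
    is_prime = np.ones(n + 1, dtype=bool)
    is_prime[:2] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = False   # smaller multiples already struck
    return np.flatnonzero(is_prime)

print(sieve_of_eratosthenes(50))
# [ 2  3  5  7 11 13 17 19 23 29 31 37 41 43 47]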
{"title":"Prime Number Sieving—A Systematic Review with Performance Analysis","authors":"Mircea Ghidarcea, Decebal Popescu","doi":"10.3390/a17040157","DOIUrl":"https://doi.org/10.3390/a17040157","url":null,"abstract":"The systematic generation of prime numbers has been almost ignored since the 1990s, when most of the IT research resources related to prime numbers migrated to studies on the use of very large primes for cryptography, and little effort was made to further the knowledge regarding techniques like sieving. At present, sieving techniques are mostly used for didactic purposes, and no real advances seem to be made in this domain. This systematic review analyzes the theoretical advances in sieving that have occurred up to the present. The research followed the PRISMA 2020 guidelines and was conducted using three established databases: Web of Science, IEEE Xplore and Scopus. Our methodical review aims to provide an extensive overview of the progress in prime sieving—unfortunately, no significant advancements in this field were identified in the last 20 years.","PeriodicalId":502609,"journal":{"name":"Algorithms","volume":"7 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140706387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kyle McMillan, R. So, Camilo Libedinsky, Kai Keng Ang, Brian Premchand
Background. Brain–machine interfaces (BMIs) offer users the ability to directly communicate with digital devices through neural signals decoded with machine learning (ML)-based algorithms. Spiking Neural Networks (SNNs) are a type of Artificial Neural Network (ANN) that operates on neural spikes instead of continuous scalar outputs. Compared to traditional ANNs, SNNs perform fewer computations, use less memory, and better mimic biological neurons. However, SNNs only retain information for short durations, limiting their ability to capture long-term dependencies in time-variant data. Here, we propose a novel spike-weighted SNN with spiking long short-term memory (swSNN-SLSTM) for a regression problem. Spike-weighting captures neuronal firing rate instead of membrane potential, and the SLSTM layer captures long-term dependencies. Methods. We compared the performance of various ML algorithms in decoding directional movements, using a dataset of microelectrode recordings from a macaque performing a directional joystick task, as well as an open-source dataset. We thus quantified how swSNN-SLSTM performed relative to existing ML models: an unscented Kalman filter, an LSTM-based ANN, and a membrane-based SNN technique. Results. The proposed swSNN-SLSTM outperforms the unscented Kalman filter, the LSTM-based ANN, and the membrane-based SNN technique. This shows that incorporating SLSTM can better capture long-term dependencies within neural data. Moreover, the proposed swSNN-SLSTM algorithm shows promise for reducing power consumption and heat dissipation in implanted BMIs.
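The swSNN-SLSTM itself is not shown here; the NumPy sketch below only illustrates the spike-weighting idea, i.e., reading out per-neuron firing rates from a leaky integrate-and-fire layer rather than membrane potentials. All constants and shapes are illustrative, and the SLSTM layer is omitted.

import numpy as np

def lif_firing_rates(inputs, w, tau=20.0, v_th=1.0, dt=1.0):
    # inputs: (T, n_in) spike counts per time bin; w: (n_in, n_out).
    # Membrane potentials leak, integrate synaptic drive, and emit a spike
    # (then reset) when they cross threshold; the readout is the mean
    # firing rate per neuron rather than the potential itself.
    T = inputs.shape[0]
    v = np.zeros(w.shape[1])
    spikes = np.zeros((T, w.shape[1]))
    for t in range(T):
        v += dt / tau * (-v) + inputs[t] @ w
        fired = v >= v_th
        spikes[t] = fired
        v[fired] = 0.0
    return spikes.mean(axis=0)

rates = lif_firing_rates(np.random.poisson(0.2, (100, 32)),
                         0.1 * np.random.rand(32, 8))
print(rates)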
{"title":"Spike-Weighted Spiking Neural Network with Spiking Long Short-Term Memory: A Biomimetic Approach to Decoding Brain Signals","authors":"Kyle McMillan, R. So, Camilo Libedinsky, Kai Keng Ang, Brian Premchand","doi":"10.3390/a17040156","DOIUrl":"https://doi.org/10.3390/a17040156","url":null,"abstract":"Background. Brain–machine interfaces (BMIs) offer users the ability to directly communicate with digital devices through neural signals decoded with machine learning (ML)-based algorithms. Spiking Neural Networks (SNNs) are a type of Artificial Neural Network (ANN) that operate on neural spikes instead of continuous scalar outputs. Compared to traditional ANNs, SNNs perform fewer computations, use less memory, and mimic biological neurons better. However, SNNs only retain information for short durations, limiting their ability to capture long-term dependencies in time-variant data. Here, we propose a novel spike-weighted SNN with spiking long short-term memory (swSNN-SLSTM) for a regression problem. Spike-weighting captures neuronal firing rate instead of membrane potential, and the SLSTM layer captures long-term dependencies. Methods. We compared the performance of various ML algorithms during decoding directional movements, using a dataset of microelectrode recordings from a macaque during a directional joystick task, and also an open-source dataset. We thus quantified how swSNN-SLSTM performed compared to existing ML models: an unscented Kalman filter, LSTM-based ANN, and membrane-based SNN techniques. Result. The proposed swSNN-SLSTM outperforms both the unscented Kalman filter, the LSTM-based ANN, and the membrane based SNN technique. This shows that incorporating SLSTM can better capture long-term dependencies within neural data. Also, our proposed swSNN-SLSTM algorithm shows promise in reducing power consumption and lowering heat dissipation in implanted BMIs.","PeriodicalId":502609,"journal":{"name":"Algorithms","volume":"19 36","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140711703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shahad Alahmed, Qutaiba Alasad, J. Yuan, Mohammed Alawad
The rapid expansion and pervasive reach of the internet in recent years have raised concerns about evolving and adaptable online threats, particularly with the extensive integration of Machine Learning (ML) systems into our daily routines. These systems are increasingly becoming targets of malicious attacks that seek to distort their functionality through poisoning. Such attacks aim to warp the intended operations of these services, deviating them from their true purpose. Poisoning renders systems susceptible to unauthorized access, enabling illicit users to masquerade as legitimate ones and compromising the integrity of smart technology-based systems such as Network Intrusion Detection Systems (NIDSs). It is therefore necessary to continue studying the resilience of deep learning network systems under poisoning attacks, specifically attacks that interfere with the integrity of data conveyed over networks. This paper explores the resilience of deep learning (DL)-based NIDSs against untethered white-box attacks. More specifically, it introduces a poisoning attack technique designed for deep learning that injects varying amounts of altered instances into training datasets at diverse rates and then investigates the attack's influence on model performance. We observe that increasing injection rates (from 1% to 50%) with a randomly amplified distribution only slightly affected the overall performance of the system, as represented by an accuracy of 0.93 at the end of the experiments. However, the remaining measures, such as PPV (0.082), FPR (0.29), and MSE (0.67), indicate that the data-manipulation poisoning attacks do impact the deep learning model. These findings shed light on the vulnerability of DL-based NIDSs under poisoning attacks, emphasizing the importance of securing such systems against these sophisticated threats and of considering appropriate defense techniques. Our analysis, supported by experimental results, shows that the generated poisoned data significantly impact model performance and are hard to detect.
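The paper's exact attack is not reproduced here; the snippet below is a generic sketch of rate-controlled data-manipulation poisoning (noise-amplified copies of training instances with flipped labels injected at a chosen rate). Function names and constants are illustrative.

import numpy as np

def poison_training_set(X, y, rate=0.10, noise=0.5, seed=0):
    # Select a fraction of the training rows, add amplified random noise,
    # flip their binary attack/benign labels, and append them to the set.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    X_p = X[idx] + noise * rng.standard_normal(X[idx].shape)
    y_p = 1 - y[idx]
    return np.vstack([X, X_p]), np.concatenate([y, y_p])

X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, 1000)
X_poisoned, y_poisoned = poison_training_set(X, y, rate=0.10)
print(X_poisoned.shape, y_poisoned.shape)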
{"title":"Impacting Robustness in Deep Learning-Based NIDS through Poisoning Attacks","authors":"Shahad Alahmed, Qutaiba Alasad, J. Yuan, Mohammed Alawad","doi":"10.3390/a17040155","DOIUrl":"https://doi.org/10.3390/a17040155","url":null,"abstract":"The rapid expansion and pervasive reach of the internet in recent years have raised concerns about evolving and adaptable online threats, particularly with the extensive integration of Machine Learning (ML) systems into our daily routines. These systems are increasingly becoming targets of malicious attacks that seek to distort their functionality through the concept of poisoning. Such attacks aim to warp the intended operations of these services, deviating them from their true purpose. Poisoning renders systems susceptible to unauthorized access, enabling illicit users to masquerade as legitimate ones, compromising the integrity of smart technology-based systems like Network Intrusion Detection Systems (NIDSs). Therefore, it is necessary to continue working on studying the resilience of deep learning network systems while there are poisoning attacks, specifically interfering with the integrity of data conveyed over networks. This paper explores the resilience of deep learning (DL)—based NIDSs against untethered white-box attacks. More specifically, it introduces a designed poisoning attack technique geared especially for deep learning by adding various amounts of altered instances into training datasets at diverse rates and then investigating the attack’s influence on model performance. We observe that increasing injection rates (from 1% to 50%) and random amplified distribution have slightly affected the overall performance of the system, which is represented by accuracy (0.93) at the end of the experiments. However, the rest of the results related to the other measures, such as PPV (0.082), FPR (0.29), and MSE (0.67), indicate that the data manipulation poisoning attacks impact the deep learning model. These findings shed light on the vulnerability of DL-based NIDS under poisoning attacks, emphasizing the significance of securing such systems against these sophisticated threats, for which defense techniques should be considered. Our analysis, supported by experimental results, shows that the generated poisoned data have significantly impacted the model performance and are hard to be detected.","PeriodicalId":502609,"journal":{"name":"Algorithms","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140713279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ioannis K. Argyros, S. George, Samundra Regmi, Christopher I. Argyros
Iterative algorithms that require the inversion of linear operators, which is in general computationally expensive, are difficult to implement. This is why hybrid Newton-like algorithms without inverses are developed in this paper to solve Banach space-valued nonlinear equations. The inverses of the linear operator are replaced by a finite sum of fixed linear operators. Two types of convergence analysis are presented for these algorithms: semilocal and local. The Fréchet derivative of the operator in the equation is controlled by a majorant function. The semilocal analysis also relies on majorizing sequences. The celebrated contraction mapping principle is utilized to study the convergence of the Krasnoselskij-like algorithm. Numerical experimentation demonstrates that the new algorithms are essentially as effective but less expensive to implement. Although the new approach is demonstrated for Newton-like algorithms, it can be applied along the same lines to other single-step, multistep, or multipoint algorithms that use inverses of linear operators.
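One concrete way to realize the "finite sum of fixed linear operators" idea is to replace the Jacobian inverse by a truncated Neumann sum; the NumPy sketch below does this for a small nonlinear system. It is a simplified stand-in for the hybrid schemes analyzed in the paper and assumes that I - F'(x) is a contraction near the solution.

import numpy as np

def newton_like_no_inverse(F, J, x0, m=3, tol=1e-10, max_iter=50):
    # Instead of solving J(x) s = F(x), apply the finite sum
    # S_m = I + (I - J) + ... + (I - J)^m, which approximates J^{-1}
    # when the spectral radius of I - J is below one.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        A = J(x)
        I = np.eye(len(x))
        S, P = I.copy(), I.copy()
        for _ in range(m):
            P = P @ (I - A)
            S += P
        step = S @ F(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy system x = 0.3*cos(y), y = 0.3*cos(x); solution ~ (0.288, 0.288).
F = lambda v: np.array([v[0] - 0.3 * np.cos(v[1]), v[1] - 0.3 * np.cos(v[0])])
J = lambda v: np.array([[1.0, 0.3 * np.sin(v[1])], [0.3 * np.sin(v[0]), 1.0]])
print(newton_like_no_inverse(F, J, np.array([0.0, 0.0])))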
{"title":"Hybrid Newton-like Inverse Free Algorithms for Solving Nonlinear Equations","authors":"Ioannis K. Argyros, S. George, Samundra Regmi, Christopher I. Argyros","doi":"10.3390/a17040154","DOIUrl":"https://doi.org/10.3390/a17040154","url":null,"abstract":"Iterative algorithms requiring the computationally expensive in general inversion of linear operators are difficult to implement. This is the reason why hybrid Newton-like algorithms without inverses are developed in this paper to solve Banach space-valued nonlinear equations. The inverses of the linear operator are exchanged by a finite sum of fixed linear operators. Two types of convergence analysis are presented for these algorithms: the semilocal and the local. The Fréchet derivative of the operator on the equation is controlled by a majorant function. The semi-local analysis also relies on majorizing sequences. The celebrated contraction mapping principle is utilized to study the convergence of the Krasnoselskij-like algorithm. The numerical experimentation demonstrates that the new algorithms are essentially as effective but less expensive to implement. Although the new approach is demonstrated for Newton-like algorithms, it can be applied to other single-step, multistep, or multipoint algorithms using inverses of linear operators along the same lines.","PeriodicalId":502609,"journal":{"name":"Algorithms","volume":"43 3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140716864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}