The rapid deployment and low-cost inference of controller area network (CAN) bus anomaly detection models on intelligent vehicles can drive the development of the Green Internet of Vehicles. Anomaly detection on intelligent vehicles often relies on recurrent neural network models, but computational resources for these models are limited on small platforms. Model compression is therefore essential to ensure CAN bus security under restricted computing resources while improving computational efficiency. However, the shared recurrent units of recurrent neural networks significantly constrain their compression. In this study, we propose a structured pruning method for long short-term memory (LSTM) networks based on the contribution values of shared vectors. By analyzing the contribution value of each dimension of the shared vectors, the model's weight matrix is structurally pruned, and the output of the LSTM layer is supplemented to maintain information integrity between adjacent network layers. We further propose an approximate matrix multiplication module that runs throughout model computation and is deployed in parallel with the pruning module. Evaluated on a realistic public CAN bus dataset, our method achieves highly structured pruning, improves computational efficiency, and maintains performance stability compared with other compression methods.
"LSTM-Based Model Compression for CAN Security in Intelligent Vehicles" — Yuan Feng; Yingxu Lai; Ye Chen; Zhaoyi Zhang; Jingwen Wei. IEEE Transactions on Artificial Intelligence, vol. 5, no. 12, pp. 6457-6471. DOI: 10.1109/TAI.2024.3438110. Published 2024-08-05.
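As an illustration of the contribution-based structured pruning described in the abstract above, here is a minimal NumPy sketch: each dimension of the shared LSTM output vector is scored by its mean absolute activation, and low-contribution rows of the following layer's weight matrix are removed. The scoring rule, the `keep_ratio` parameter, and the function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def contribution_scores(hidden_states):
    # hidden_states: (timesteps, hidden_dim) outputs of the LSTM layer.
    # Score each shared-vector dimension by its mean absolute activation
    # (an assumed, illustrative contribution measure).
    return np.abs(hidden_states).mean(axis=0)

def structured_prune(W_next, hidden_states, keep_ratio=0.5):
    # W_next: (hidden_dim, out_dim) weight matrix of the layer after the LSTM.
    # Keep only the hidden dimensions with the highest contribution scores,
    # removing whole rows of W_next (structured, not element-wise, pruning).
    scores = contribution_scores(hidden_states)
    k = max(1, int(keep_ratio * len(scores)))
    keep = np.sort(np.argsort(scores)[-k:])
    return W_next[keep, :], keep

rng = np.random.default_rng(0)
h = rng.normal(size=(100, 8))      # 100 timesteps, 8 hidden dimensions
W = rng.normal(size=(8, 4))        # dense layer following the LSTM
W_pruned, kept = structured_prune(W, h, keep_ratio=0.5)
print(W_pruned.shape, kept.shape)  # (4, 4) (4,)
```

Because entire rows are dropped, the pruned matrix stays dense and the downstream matrix multiplication shrinks accordingly, which is what makes structured (rather than unstructured) pruning attractive on small platforms.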
Pub Date: 2024-08-01. DOI: 10.1109/TAI.2024.3436538
Renato Sortino;Thomas Cecconello;Andrea De Marco;Giuseppe Fiameni;Andrea Pilzer;Daniel Magro;Andrew M. Hopkins;Simone Riggi;Eva Sciacca;Adriano Ingallinera;Cristobal Bordiu;Filomena Bufano;Concetto Spampinato
As the Square Kilometre Array (SKA) nears completion, there is an increasing demand for accurate and reliable automated solutions to extract valuable information from the vast amounts of data it will acquire. Automated source finding is a particularly important task in this context, as it enables the detection and classification of astronomical objects. Deep-learning-based object detection and semantic segmentation models have proven suitable for this purpose. However, training such deep networks requires a large volume of labeled data, which is not trivial to obtain in radio astronomy. Since the data must be manually labeled by experts, the process does not scale to large datasets, limiting the ability to leverage deep networks for many tasks. In this work, we propose RADiff, a generative approach based on conditional diffusion models trained on an annotated radio dataset to generate synthetic images containing radio sources of different morphologies, in order to augment existing datasets and mitigate class imbalance. We also show that it is possible to generate fully synthetic image-annotation pairs to automatically augment any annotated dataset. We evaluate the effectiveness of this approach by training a semantic segmentation model on a real dataset augmented in two ways: 1) using synthetic images obtained from real masks; and 2) generating images from synthetic semantic masks. Finally, we show how the model can be applied to populate background noise maps for simulating radio maps for data challenges.
"RADiff: Controllable Diffusion Models for Radio Astronomical Maps Generation" — Renato Sortino; Thomas Cecconello; Andrea De Marco; Giuseppe Fiameni; Andrea Pilzer; Daniel Magro; Andrew M. Hopkins; Simone Riggi; Eva Sciacca; Adriano Ingallinera; Cristobal Bordiu; Filomena Bufano; Concetto Spampinato. IEEE Transactions on Artificial Intelligence, vol. 5, no. 12, pp. 6524-6535. DOI: 10.1109/TAI.2024.3436538. Published 2024-08-01.
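The dataset-rebalancing idea in the RADiff abstract above (generating synthetic image-annotation pairs for under-represented source morphologies) can be sketched generically as follows. Here `generate_pair` is a hypothetical stand-in for sampling a class-conditional diffusion model; the function and variable names are illustrative assumptions, not the RADiff API.

```python
import numpy as np
from collections import Counter

def rebalance_with_synthetic(samples, generate_pair):
    # samples: list of (image, mask, source_class) tuples from the real dataset.
    # generate_pair: callable(cls) -> (image, mask), a hypothetical stand-in
    #                for sampling a class-conditional generative model.
    counts = Counter(cls for _, _, cls in samples)
    target = max(counts.values())
    augmented = list(samples)
    # Top up every minority class with synthetic pairs until all classes match.
    for cls, n in counts.items():
        for _ in range(target - n):
            img, mask = generate_pair(cls)
            augmented.append((img, mask, cls))
    return augmented

rng = np.random.default_rng(1)

def fake_pair(cls):
    # Hypothetical generator: random "radio map" plus a sparse binary source mask.
    return rng.normal(size=(32, 32)), (rng.random((32, 32)) > 0.95).astype(np.uint8)

real = [(np.zeros((32, 32)), np.zeros((32, 32), np.uint8), c)
        for c in ["compact"] * 5 + ["extended"] * 2]
balanced = rebalance_with_synthetic(real, fake_pair)
print(Counter(c for _, _, c in balanced))  # both classes now have 5 samples
```

A segmentation model trained on `balanced` then sees every morphology class equally often, which is the imbalance mitigation the abstract refers to.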
Horizontal federated learning (HFL) exhibits substantial similarity in feature space across distinct clients. However, not all features contribute significantly to training the global model. Moreover, the curse of dimensionality slows training. Reducing irrelevant and redundant features from the feature space therefore makes training faster and less expensive. This work aims to identify a common feature subset across the clients in federated settings. We introduce a hybrid approach called Fed-MOFS, 1
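A generic sketch of selecting a common feature subset across federated clients, in the spirit of the approach described above: each client scores its features locally, and the server aggregates the scores to pick one shared subset. The correlation-based scoring and mean aggregation here are illustrative assumptions, not the Fed-MOFS method itself.

```python
import numpy as np

def client_feature_scores(X, y):
    # Score each feature by absolute Pearson correlation with the label.
    # (Assumed relevance measure for illustration only.)
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    num = (Xc * yc[:, None]).sum(axis=0)
    den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12
    return np.abs(num / den)

def select_common_features(client_data, k):
    # Server side: average the per-client scores and keep the top-k features,
    # yielding a single feature subset shared by all clients.
    scores = np.mean([client_feature_scores(X, y) for X, y in client_data], axis=0)
    return np.sort(np.argsort(scores)[-k:])

rng = np.random.default_rng(2)
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 10))
    # Features 0 and 3 are informative on every client; the rest are noise.
    y = 2 * X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=50)
    clients.append((X, y))
common = select_common_features(clients, k=2)
print(common)
```

Only feature scores, not raw data, travel to the server in this sketch, which is the property that makes such selection compatible with the federated setting.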