Pub Date: 2025-04-22 | DOI: 10.1109/LES.2025.3562707
Saleh Mulhem;Eike Schultz;Lukas Groth;Mladen Berekovic;Rainer Buchty
Lattice-based post-quantum cryptography and homomorphic encryption schemes have become key methodologies for securing today's and tomorrow's systems. This comes at the cost of a vastly increased computational load due to the multiplication of polynomials with wide-integer coefficients. NIST recommends the number theoretic transform (NTT) as an efficient remedy; nevertheless, the NTT itself requires hardware acceleration for large coefficient counts. This letter explores the use of systolic arrays as NTT accelerators and finds an optimal hardware architecture configuration across problem sizes. A design-space exploration yields a new configuration for an efficient 2-D NTT accelerator that retains the ability to execute other workloads. Our findings indicate that, in 22-nm technology, an optimal systolic-array accelerator requires an area of 53.04 mm². The accelerator can efficiently apply the NTT to a polynomial with 4096 32-bit integer coefficients in 3296 cycles and 1794.92 nJ.
"Optimizing Systolic Array-Based NTT Accelerators," IEEE Embedded Systems Letters, vol. 18, no. 1, pp. 7–10.
Pub Date: 2025-04-21 | DOI: 10.1109/LES.2025.3562862
Jiaxu Cong;Jingyu Wang;Xiqin Tang;Bin Tong;Delong Shang
The sparsity of spike generation in neuromorphic computing makes asynchronous communication inherently better suited than serializer/deserializer (SerDes) links for handling sparse interchip transmissions. However, traditional asynchronous methods often require more wires than the data bit width due to their data encoding schemes. This letter introduces a novel method, the detecting data stable receiver-transmitter (D2SRT), for interchip communication in neuromorphic chips. By detecting data stability, D2SRT achieves low power and high performance, with a single-bit energy consumption of 13.9 pJ and a throughput of 217.2 Mb/s, surpassing traditional methods. Experimental results show that D2SRT meets the bandwidth requirements of most spiking neural networks (SNNs) and achieves exceptionally low dynamic power consumption.
"An Interchip Communication Method Suitable for Neuromorphic Chips by Detecting Data Stability," IEEE Embedded Systems Letters, vol. 18, no. 1, pp. 3–6.
Pub Date: 2025-04-17 | DOI: 10.1109/LES.2025.3561870
Bidyut Saha;Riya Samanta;Ram Babu Roy;Soumya K. Ghosh
We present tiny time-series neural architecture search (TinyTNAS), a hardware-aware neural architecture search (NAS) framework optimized for efficient execution on CPUs, eliminating the need for costly GPUs. Traditional NAS methods often depend on reinforcement learning or evolutionary algorithms, requiring significant GPU resources and search time, which may be inaccessible to many machine learning researchers and practitioners. TinyTNAS addresses these limitations with an intelligent grid search approach that drastically reduces search time from hours to minutes, operating seamlessly on CPUs. It enables scalable model generation tailored for resource-constrained devices, optimizing neural networks within stringent constraints on RAM, Flash, and MAC operations. TinyTNAS also supports time-bound searches, ensuring rapid and efficient architecture discovery. Experiments on benchmark datasets, including UCIHAR, PAMAP2, WISDM, MIT-BIH, and PTB-ECG, demonstrate its ability to achieve state-of-the-art accuracy while significantly reducing resource usage and latency compared to expert-designed architectures. Furthermore, it surpasses GPU-dependent hardware-aware NAS methods based on reinforcement learning and evolutionary algorithms by drastically reducing search time. The code is publicly available at https://github.com/BidyutSaha/TinyTNAS.git.
{"title":"TinyTNAS: Time-Bound, GPU-Independent Hardware-Aware Neural Architecture Search for TinyML Time-Series Classification","authors":"Bidyut Saha;Riya Samanta;Ram Babu Roy;Soumya K. Ghosh","doi":"10.1109/LES.2025.3561870","DOIUrl":"https://doi.org/10.1109/LES.2025.3561870","url":null,"abstract":"We present tiny time-series neural architecture search (TinyTNAS), a hardware-aware neural architecture search (NAS) framework optimized for efficient execution on CPUs, eliminating the need for costly GPUs. Traditional NAS methods often depend on reinforcement learning or evolutionary algorithms, requiring significant GPU resources and search time, which may be inaccessible to many machine learning researchers and practitioners. TinyTNAS addresses these limitations with an intelligent grid search approach that drastically reduces search time from hours to minutes, operating seamlessly on CPUs. It enables scalable model generation tailored for resource-constrained devices, optimizing neural networks within stringent constraints on RAM, Flash, and MAC operations. TinyTNAS also supports time-bound searches, ensuring rapid and efficient architecture discovery. Experiments on benchmark datasets, including UCIHAR, PAMAP2, WISDM, MIT-BIH, and PTB-ECG, demonstrate its ability to achieve state-of-the-art accuracy while significantly reducing resource usage and latency compared to expert-designed architectures. Furthermore, it surpasses GPU-dependent hardware-aware NAS methods based on reinforcement learning and evolutionary algorithms by drastically reducing search time. The code is publicly available at <uri>https://github.com/BidyutSaha/TinyTNAS.git</uri>.","PeriodicalId":56143,"journal":{"name":"IEEE Embedded Systems Letters","volume":"18 1","pages":"69-72"},"PeriodicalIF":2.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146162247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-14 | DOI: 10.1109/LES.2025.3560428
Javier Barrera;Leonidas Kosmidis;Jaume Abella;Francisco J. Cazorla
The search for resource isolation and segregation is relentless, as execution-time determinism (ETD) is an essential feature for a successful certification process in embedded critical domains like automotive. In those domains, the advent of advanced software-controlled features requires unprecedented computing performance, which calls for acceleration hardware, with GPUs in a prominent position. The latest NVIDIA GPU generations (Ampere, Hopper, Blackwell) feature multi-instance GPU (MIG), a mechanism that allows partitioning the GPU into fully isolated GPU instances, each with its own memory, cache, and computing resources. Despite the clear benefits of MIG for ETD, the latest NVIDIA automotive GPUs do not implement it. In this work, we first empirically analyze the benefits of MIG in a nonautomotive GPU, showing how it can be used to improve ETD. Second, we identify the likely reasons precluding the deployment of MIG in automotive GPUs: the specific needs of the automotive market, and the differences in GPU and memory technology between the high-performance GPUs that implement MIG and the automotive GPUs that lack it.
{"title":"Assessing the Use of NVIDIA Multi-Instance GPU in the Automotive Domain","authors":"Javier Barrera;Leonidas Kosmidis;Jaume Abella;Francisco J. Cazorla","doi":"10.1109/LES.2025.3560428","DOIUrl":"https://doi.org/10.1109/LES.2025.3560428","url":null,"abstract":"The search for resource isolation and segregation is relentless to increase execution time determinism (ETD) as an essential feature for a successful certification process in embedded critical domains like automotive. In those domains, the advent of advanced software-controlled features requires unprecedented computing performance that calls for the use of acceleration hardware, with GPUs having a prominent position. The latest NVIDIA GPUs generations (Ampere, Hopper, Blackwell) feature multi-instance GPU (MIG), a mechanism that allows partitioning the GPU into fully isolated GPU instances, each with its own memory, cache, and computing resources. Despite the clear benefits of MIG on ETD, the latest NVIDIA automotive GPUs do not implement it. In this work, we first empirically analyze the benefits of MIG in a nonautomotive GPU showing the main traits in its use to improve ETD. And second, we identify the potential reasons precluding the deployment of MIG for automotive GPUs: automotive market specific needs, and the difference between GPU and memory technologies used in high-performance GPUs, which implement MIG, and automotive GPUs that lack it.","PeriodicalId":56143,"journal":{"name":"IEEE Embedded Systems Letters","volume":"18 1","pages":"64-68"},"PeriodicalIF":2.0,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146162243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-09 | DOI: 10.1109/LES.2025.3559208
Chithambara Moorthii J.;Deepak Verma;Richa Mishra;Harshit Bansal;Sounak Dey;Arijit Mukherjee;Arpan Pal;Manan Suri
Several applications, ranging from artificial intelligence to encryption, require dense multibit matrix multiplications. With the advent of big-data applications and edge deployment, a recent paradigm shift focuses on energy-efficient computation methodologies such as in-memory computing (IMC). In this work, we propose SRAM IMC-based multibit multiplication with analog carry computation (SIMMAC), a novel 8T SRAM-based IMC accelerator for multibit multiplication with reconfigurable bit precision. To address the present-day challenges of IMC architectures, we propose a novel input and weight mapping strategy along with analog carry addition for in-memory computation. The proposed input and weight mapping strategy renders the implementation DAC-less, boosting the performance of the IMC macro in terms of area and power. The novel analog carry addition methodology computes the multibit product within the IMC macro, eliminating the need for peripheral digital shift-and-add circuits. With the convolutional neural network (CNN) workload mapping analyzed in this study, our architecture executes a matrix-vector multiplication (MVM) across all tiles in a single product cycle of 40 ns. Our architecture achieves 98% accuracy for MNIST classification, and 819.2 GOPS and 56.5 TOPS/W at a 200-MHz operating frequency at the TSMC 65-nm technology node.
{"title":"SIMMAC: SRAM IMC-Based Multibit Multiplication With Analog Carry Computation","authors":"Chithambara Moorthii J.;Deepak Verma;Richa Mishra;Harshit Bansal;Sounak Dey;Arijit Mukherjee;Arpan Pal;Manan Suri","doi":"10.1109/LES.2025.3559208","DOIUrl":"https://doi.org/10.1109/LES.2025.3559208","url":null,"abstract":"Several applications, ranging from artificial intelligence to encryption, require dense multibit matrix multiplications. With the advent of big-data applications and edge deployment, a recent paradigm shift focuses on energy-efficient computation methodologies such as In-Memory Computing (IMC). In this work, we propose Typo SRAM IMC-based Multibit Multiplication with Analog Carry Computation (SIMMAC), a novel 8T SRAM-based IMC accelerator for multibit multiplication with reconfigurable bit-precision. To address the present-day challenges of IMC architectures, we propose a novel input and weight mapping strategy along with analog carry addition for in-memory computation. The proposed input and weight mapping strategy renders the implementation to be DAC-less, hence boosting the performance of the IMC Macro in terms of area and power. The novel analog carry addition methodology computes the multibit product within the IMC Macro, eliminating the need for peripheral digital shift-and-add circuits. With the proposed Convolutional Neural Network(CNN) workload mapping analyzed in this study, our architecture executes the Matrix Vector Multiplication (MVM) across all tiles in a single product cycle of 40ns. Our architecture achieves 98% accuracy for MNIST classification and 819.2 GOPS and 56.5 TOPS/W at 200 MHz operating frequency at TSMC 65 nm technology node.","PeriodicalId":56143,"journal":{"name":"IEEE Embedded Systems Letters","volume":"18 1","pages":"31-35"},"PeriodicalIF":2.0,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146162245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-09 | DOI: 10.1109/LES.2025.3559333
Letian Huang;Wenxu Cao;Linhan Sun;Zeyu Li;Ruitai Wang;Junyi Song;Shuyan Jiang
In compute-in-memory (CIM)-based systems, the weights of neural network models must be mapped to the memory array, and the mapping policy has a huge impact on system performance. In this letter, existing weight mapping policies are analyzed and categorized into two types: 1) position-wise and 2) channel-wise mapping. The channel-wise policy produces noncontiguous memory addresses at its output, while the position-wise policy suffers from a large amount of input data and the problem of data concatenation. A novel weight mapping policy, named mixed-dimension mapping, is then proposed to overcome the limitations of the existing policies. Experimental results show that it reduces the communication load of the system by 14%–32% and avoids data concatenation completely.
"Optimizing Internal Communication of Compute-in-Memory-Based AI Accelerator," IEEE Embedded Systems Letters, vol. 18, no. 1, pp. 81–84.
Pub Date: 2025-04-07 | DOI: 10.1109/LES.2025.3558314
Milanpreet Kaur;Karminder Singh;Suman Kumar
In modern computing environments, heterogeneous multicore processors are increasingly used to balance performance and energy efficiency. However, as processor architectures become more complex and workloads grow, traditional dynamic voltage and frequency scaling (DVFS) methods struggle to ensure thermal stability without significant performance tradeoffs in high-performance computing applications. This letter proposes a scalable adaptive thermal management framework that combines phase-based thermal detection of workloads with adaptive migration techniques. The framework dynamically detects the thermal phases of running applications and optimally allocates tasks and threads across cores based on thermal characteristics and workload demands. Implemented on the Apalis iMX8 and evaluated using the PARSEC benchmark, the proposed framework reduces average and peak temperatures by 16.7 °C and 32.5 °C, respectively, while improving performance by 13.5% compared to DVFS-based dynamic thermal management techniques. It also outperforms methods such as compiler-assisted reinforcement learning for thermal-aware task scheduling, DVFS, and PTS, demonstrating superior efficiency and adaptability in thermal management.
{"title":"Adaptive Behavior-Driven Thermal Management Framework in Heterogeneous Multicore Processors","authors":"Milanpreet Kaur;Karminder Singh;Suman Kumar","doi":"10.1109/LES.2025.3558314","DOIUrl":"https://doi.org/10.1109/LES.2025.3558314","url":null,"abstract":"In modern computing environments, heterogeneous multicore processors are increasingly used to balance performance and energy efficiency. However, as processor architectures become more complex and workloads increase, traditional dynamic voltage and frequency scaling (DVFS) methods face challenges in ensuring thermal stability without significant performance tradeoffs in high-performance computing applications. This letter proposes a scalable adaptive thermal management framework that leverages phase-based thermal detection of workloads alongside adaptive migration techniques. The framework dynamically detects the thermal phases of running applications and optimally allocates tasks and threads across cores based on thermal characteristics and workload demands. The proposed framework on the Apalis iMX8, evaluated using the PARSEC benchmark, reduces average and peak temperatures by <inline-formula> <tex-math>$16.7^{circ } C$ </tex-math></inline-formula> and <inline-formula> <tex-math>$32.5^{circ } C$ </tex-math></inline-formula>, respectively, while enhancing performance by 13.5% compared to DVFS-based dynamic thermal management techniques. It also outperforms methods, such as compiler-assisted reinforcement learning for thermal-aware task scheduling and DVFS and PTS, demonstrating superior efficiency and adaptability in thermal management.","PeriodicalId":56143,"journal":{"name":"IEEE Embedded Systems Letters","volume":"18 1","pages":"40-43"},"PeriodicalIF":2.0,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146162209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-29 | DOI: 10.1109/LES.2025.3575017
Joel A. Quevedo;Yazmin Maldonado
Efficient exploration in the early stages of hardware design can significantly enhance design quality and reduce development time. This letter introduces a novel methodology that leverages the multilevel intermediate representation (MLIR) to extract control and data flow graphs (CDFGs) with early-stage resource estimates for area, delay, and power consumption. By employing Polygeist for C-to-MLIR conversion, coupled with Graphviz for visualization, we generate structured CDFGs and then apply three scheduling algorithms: 1) as soon as possible (ASAP); 2) as late as possible (ALAP); and 3) random. Evaluated through two case studies, our approach produces valid scheduled graphs and demonstrates how both code structure and scheduling strategy critically impact hardware resource utilization and performance. This work sets the stage for resource-aware design space exploration using MLIR, enabling designers to evaluate configurations and make informed tradeoffs before time-consuming synthesis processes.
{"title":"From MLIR to Scheduled CDFG: A Design Flow for Hardware Resource Estimation","authors":"Joel A. Quevedo;Yazmin Maldonado","doi":"10.1109/LES.2025.3575017","DOIUrl":"https://doi.org/10.1109/LES.2025.3575017","url":null,"abstract":"Efficient early stages exploration in hardware design can significantly enhance design quality and reduce development time. This letter introduces a novel methodology that leverages the multilevel intermediate representation (MLIR) to extract control and data flow graphs (CDFGs) with early-stage resource estimates for area, delay, and power consumption. By employing Polygeist for C-to-MLIR conversion coupled with Graphviz for visualization, we generate structured CDFGs and then apply three scheduling algorithms: 1) ASAP; 2) ALAP; and 3) Random. Evaluated through two case studies, our approach produces valid scheduled graphs and demonstrates how both code structure and scheduling strategy critically impact hardware resource utilization and performance. This work sets the stage for resource-aware design space exploration using MLIR, enabling designers to evaluate configurations and make informed tradeoffs prior to time-consuming synthesis processes.","PeriodicalId":56143,"journal":{"name":"IEEE Embedded Systems Letters","volume":"17 6","pages":"423-426"},"PeriodicalIF":2.0,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-27 | DOI: 10.1109/LES.2025.3555547
Ricardo Ercoli;Fausto Navadian;Joaquin Urrisa;Pablo Monzón;Facundo Benavides
Collaborative robotic exploration relies on multiple robots working together to survey an unknown environment. This letter presents the implementation of a collaborative fleet of robots designed to perform autonomous 2-D indoor mapping. The main contributions are: 1) an original solution to the problem of distributing multiple tasks among multiple robots, implemented as a distributed version of the auction mechanism and 2) the release of the code through a public repository.
"Indoor Collaborative Robot Exploration: A Distributed Market-Based Approach," IEEE Embedded Systems Letters, vol. 17, no. 6, pp. 402–405.