Vision-and-language navigation (VLN) requires agents to follow natural language instructions and navigate in previously unseen environments. Large language models (LLMs) bring strong reasoning and generalization abilities to VLN. However, existing LLM-based methods still suffer from information loss in vision-to-text conversion, rigid prompting, or manually designed reasoning steps, as well as weak closed-loop control, leading to error accumulation. We propose DeepVLN, an LLM-based VLN framework that enables autonomous chain-of-thought (CoT) reasoning and adaptive navigation. DeepVLN first adapts an open-source LLM to VLN via a three-stage supervised fine-tuning pipeline, and then further optimizes its closed-loop policy with reinforcement learning, encouraging robust feedback use and online error correction without predefined reasoning templates. Additionally, an API-based collaborative reasoning module enables a lightweight local agent to selectively query a stronger cloud LLM under high uncertainty, thereby balancing performance and computational cost. Experiments on R2R, RxR, and REVERIE show that DeepVLN achieves competitive or superior results to strong VLN-specific and LLM-based baselines, with higher success rates and path efficiency in most settings. These results demonstrate the effectiveness of equipping LLMs with autonomous, closed-loop reasoning for embodied navigation.
{"title":"DeepVLN: Vision-and-Language Navigation via Deep Reasoning and Collaborative Mechanisms Based on Large Language Models","authors":"Peng Gao;Peng Wang;Fei Wang;Hamido Fujita;Hanan Aljuaid;Jun-Liang Shang","doi":"10.1109/JSTSP.2026.3652407","DOIUrl":"https://doi.org/10.1109/JSTSP.2026.3652407","url":null,"abstract":"Vision-and-language navigation (VLN) requires agents to follow natural language instructions and navigate in previously unseen environments. Large language models (LLMs) bring strong reasoning and generalization abilities to VLN. However, existing LLM-based methods still suffer from information loss in vision-to-text conversion, rigid prompting, or manually designed reasoning steps, as well as weak closed-loop control, leading to error accumulation. We propose DeepVLN, an LLM-based VLN framework that enables autonomous chain-of-thought (CoT) reasoning and adaptive navigation. DeepVLN first adapts an open-source LLM to VLN via a three-stage supervised fine-tuning pipeline, and then further optimizes its closed-loop policy with reinforcement learning, encouraging robust feedback use and online error correction without predefined reasoning templates. Additionally, an API-based collaborative reasoning module enables a lightweight local agent to selectively query a stronger cloud LLM under high uncertainty, thereby balancing performance and computational cost. Experiments on R2R, RxR, and REVERIE show that DeepVLN achieves competitive or superior results to strong VLN-specific and LLM-based baselines, with higher success rates and path efficiency in most settings. 
These results demonstrate the effectiveness of equipping LLMs with autonomous, closed-loop reasoning for embodied navigation.","PeriodicalId":13038,"journal":{"name":"IEEE Journal of Selected Topics in Signal Processing","volume":"20 1","pages":"47-62"},"PeriodicalIF":13.7,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147274997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
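The API-based collaborative reasoning step described above — escalate to a stronger cloud LLM only when the local agent is uncertain — can be sketched as a simple entropy gate. Everything below (the threshold value, the `cloud_fn` stand-in, the function names) is a hypothetical illustration of the idea, not the paper's implementation:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of an action distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def choose_action(local_probs, cloud_fn, threshold=1.0):
    """Escalate to the cloud model only when the local agent is uncertain.

    local_probs: action distribution from the lightweight local agent.
    cloud_fn:    stand-in for a cloud-LLM query (hypothetical).
    threshold:   entropy gate in nats (hypothetical value).
    """
    if entropy(local_probs) > threshold:
        return cloud_fn(local_probs), "cloud"
    # Confident enough: act on the local agent's argmax.
    return max(range(len(local_probs)), key=local_probs.__getitem__), "local"

# A peaked distribution stays local; a near-uniform one escalates.
print(choose_action([0.9, 0.05, 0.05], lambda p: 0))  # (0, 'local')
print(choose_action([0.34, 0.33, 0.33], lambda p: 0))  # (0, 'cloud')
```

Gating on the local policy's own entropy is one common way to trade accuracy against cloud-query cost; the paper's module may use a different uncertainty signal.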
Pub Date: 2025-12-31 | DOI: 10.1109/JSTSP.2025.3649959
Zihan Zhang;Thierry Blu
This paper introduces a blind source separation approach based on a “source exclusion” principle for hyperspectral image unmixing (HSU). We define the exclusion mathematically as a metric quantifying the global purity of the pixels. We then develop an efficient algorithm to minimize this criterion (Weak Exclusion Principle—WEP), and devise a convex optimization strategy (WEP+) to enforce sum-to-one and non-negativity, which are common constraints for hyperspectral sources. Through comprehensive experimental validations against standard and state-of-the-art unmixing algorithms on synthetic and real-world datasets, we demonstrate the superior accuracy and computational efficiency of our WEP+ solution.
{"title":"On the Exclusion of Hyperspectral Sources","authors":"Zihan Zhang;Thierry Blu","doi":"10.1109/JSTSP.2025.3649959","DOIUrl":"https://doi.org/10.1109/JSTSP.2025.3649959","url":null,"abstract":"This paper introduces a blind source separation approach based on a “source exclusion” principle for hyperspectral image unmixing (HSU). We define the exclusion mathematically as a metric quantifying the global purity of the pixels. We then develop an efficient algorithm to minimize this criterion (Weak Exclusion Principle—WEP), and devise a convex optimization strategy (WEP+) to enforce sum-to-one and non-negativity, which are common constraints for hyperspectral sources. Through comprehensive experimental validations against standard and state-of-the-art unmixing algorithms on synthetic and real-world datasets, we demonstrate the superior accuracy and computational efficiency of our WEP+ solution.","PeriodicalId":13038,"journal":{"name":"IEEE Journal of Selected Topics in Signal Processing","volume":"19 8","pages":"1983-1995"},"PeriodicalIF":13.7,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146122766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collaborative multi-agent sensing via Vehicle-to-Vehicle (V2V) and Vehicle-to-Everything (V2X) communication has emerged as a promising solution to overcome the limitations of single-vehicle perception in autonomous driving. However, efficiently fusing large-scale, high-dimensional features across multiple vehicles remains a significant challenge, particularly under communication and localization constraints. This paper proposes a collaborative 3D object detection framework, termed Collaborative 3D Detection with Multiscale Clustering Mamba (CoMCM). It performs adaptive feature fusion across multiple spatial scales, integrating both coarse- and fine-grained information. CoMCM comprises two core components: the Contextual Clustering Mamba (CCMamba) and a collaborative Mamba in bird's-eye-view (BEV) fusion module (CoM). The CCMamba module incorporates multiscale clustering within a Mamba-based state-space model to capture both global and local contextual information. This design addresses the limitations of selective state modeling in non-causal BEV representations. The CoM module further fuses BEV features from multiple Connected and Automated Vehicles (CAVs) using relative pose-aware attention and adaptive weighting, enabling effective multi-vehicle collaboration. Extensive experiments on the large-scale OPV2V and V2XSet datasets and the real-world DAIR-V2X dataset demonstrate that CoMCM significantly outperforms existing collaborative 3D object detection methods and remains robust under bandwidth limitations and pose estimation errors. Moreover, CoMCM achieves low computational cost while maintaining high detection accuracy. CoMCM lays the foundation for scalable and accurate collaborative perception in intelligent connected vehicle systems operating in complex environments.
{"title":"CoMCM:Collaborative 3D Detection With Multiscale Clustering Mamba","authors":"Tong Wang;Jie Guo;Ming Ouyang;Peng Xue;Lu Wang;Pei Xiao","doi":"10.1109/JSTSP.2025.3650028","DOIUrl":"https://doi.org/10.1109/JSTSP.2025.3650028","url":null,"abstract":"Collaborative multi-agent sensing via Vehicle-to-Vehicle (V2V) and Vehicle-to-Everything (V2X) communication has emerged as a promising solution to overcome the limitations of single-vehicle perception in autonomous driving. However, efficiently fusing large-scale, high-dimensional features across multiple vehicles remains a significant challenge, particularly under communication localization constraints. This paper proposes a collaborative 3D object detection framework, termed Collaborative 3D Detection with Multiscale Clustering Mamba (CoMCM). It performs adaptive feature fusion across multiple spatial scales, integrating both coarse- and fine-grained information. CoMCM comprises two core components: the Contextual Clustering Mamba (CCMamba) and a collaborative Mamba in BEV fusion module(CoM). The MCMamba module incorporates multiscale clustering within a Mamba-based state-space model to capture both global and local contextual information. This design addresses the limitations of selective state modeling in non-causal BEV representations. The CoM module further fuses BEV features from multiple Connected and Automated Vehicles (CAVs) using relative pose-aware attention and adaptive weighting, enabling effective multi-vehicle collaboration. Extensive experiments on large-scale datasets, OPV2V, V2XSet and the real-world DAIR-V2X datasets, demonstrate that CoMCM significantly outperforms existing collaborative 3D object detection methods and remains robust under bandwidth limitations and pose estimation errors. Moreover, CoMCM achieves low computational cost while maintaining high detection accuracy. 
CoMCM lays the foundation for scalable and accurate collaborative perception in intelligent connected vehicle systems operating in complex environments.","PeriodicalId":13038,"journal":{"name":"IEEE Journal of Selected Topics in Signal Processing","volume":"19 8","pages":"1996-2009"},"PeriodicalIF":13.7,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146122798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
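The adaptive-weighting idea in the CoM module — weight each vehicle's BEV feature map by a confidence score before summing — can be illustrated with a softmax-weighted fusion. The per-vehicle scores below are hand-supplied stand-ins for CoM's learned, pose-aware attention logits, so this is a toy sketch of the fusion pattern, not the paper's module:

```python
import math

def fuse_bev(features, scores):
    """Softmax-weighted fusion of per-vehicle BEV feature maps.

    features: list of equally-shaped 2D grids, one per CAV.
    scores:   per-vehicle confidence logits (hypothetical stand-ins for
              learned pose-aware attention scores).
    """
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    rows, cols = len(features[0]), len(features[0][0])
    fused = [[sum(wk * fk[r][c] for wk, fk in zip(weights, features))
              for c in range(cols)] for r in range(rows)]
    return fused, weights

# Two vehicles with equal confidence contribute equally.
fused, weights = fuse_bev([[[1.0, 3.0]], [[3.0, 1.0]]], [0.0, 0.0])
print(weights)  # [0.5, 0.5]
print(fused)    # [[2.0, 2.0]]
```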
Pub Date: 2025-12-31 | DOI: 10.1109/JSTSP.2025.3647358
Title: IEEE Signal Processing Society Information
IEEE Journal of Selected Topics in Signal Processing, vol. 19, no. 7, pp. C3-C3
Pub Date: 2025-12-30 | DOI: 10.1109/JSTSP.2025.3644519
Title: IEEE Signal Processing Society Information
IEEE Journal of Selected Topics in Signal Processing, vol. 19, no. 6, pp. C3-C3
Pub Date: 2025-12-24 | DOI: 10.1109/JSTSP.2025.3648286
Son T. Huynh;Hai X. Nguyen;Dat Q. Duong;Tien K. Nguyen;Nhi K. P. Nguyen;Hoang Tran Le Nguyen;Phuong N. Vu;Nghi P. G. Nguyen;Khoi D. Vu;Hoa Q. N. Ho;Tuan-Anh Tran;Stephen Baker;Binh T. Nguyen
Antimicrobial resistance (AMR) has become a global health crisis, creating an urgent need for rapid and accurate bacterial identification to guide appropriate antibiotic therapy. Automated segmentation of bacteria in microscopic images enables quantitative analysis of cellular morphology and spatial patterns, offering valuable cues for early species recognition. Such image-based insights can accelerate diagnostic decisions and promote rational antibiotic use, ultimately mitigating the impact of AMR. Hence, we present a significantly expanded dataset of E. coli fluorescence images, extending the dataset of Duong et al. (2023) with additional samples and refined annotations to better support bacterial segmentation research. We extensively re-examine several popular segmentation models using standardised COCO metrics ($mAP$, $mAP_{0.50}$, $mAP_{0.75}$, $mAP_{\text{small}}$) and find that methods originally designed for semantic segmentation struggle with instance-level tasks. On the other hand, two-stage detectors, especially Cascade Mask R-CNN (Vasconcelos et al., 2018) and its Hybrid Task Cascade (HTC) (Chen et al., 2019) variant, perform best when paired with modern backbone networks. In particular, our framework introduces a novel synergy between the ConvNeXtV2-Tiny backbone and the HTC architecture, enhanced by the CBAM (Kweon et al., 2018) attention mechanism (reduction ratio of eight), to achieve superior feature refinement and instance delineation. This configuration sets a new performance benchmark, achieving $mAP_{0.50} = 0.940$, $mAP_{0.75} = 0.882$, $mAP_{\text{small}} = 0.788$, and $mAP_{0.50:0.95} = 0.787$, and directly supports the development of data-driven, clinically relevant microbial analysis systems that can guide evidence-based and timely antimicrobial treatment strategies.
{"title":"FluoEcoli-Instance: A High-Content Fluorescence Microscopy Dataset and CBAM-Enhanced Hybrid Task Cascade for E. Coli Instance Segmentation","authors":"Son T. Huynh;Hai X. Nguyen;Dat Q. Duong;Tien K. Nguyen;Nhi K. P. Nguyen;Hoang Tran Le Nguyen;Phuong N. Vu;Nghi P. G. Nguyen;Khoi D. Vu;Hoa Q. N. Ho;Tuan-Anh Tran;Stephen Baker;Binh T. Nguyen","doi":"10.1109/JSTSP.2025.3648286","DOIUrl":"https://doi.org/10.1109/JSTSP.2025.3648286","url":null,"abstract":"Antimicrobial resistance (AMR) has become a global health crisis, creating an urgent need for rapid and accurate bacterial identification to guide appropriate antibiotic therapy. Automated segmentation of bacteria in microscopic images enables quantitative analysis of cellular morphology and spatial patterns, offering valuable cues for early species recognition. Such image-based insights can accelerate diagnostic decisions and promote rational antibiotic use, ultimately mitigating the impact of AMR. Hence, we present a significantly expanded dataset of <italic>E. coli</i> fluorescence images, extending that of Dat et al. (Duong et al., 2023) with additional samples and refined annotations to better support bacterial segmentation research. We extensively re-examine several popular segmentation models using standardised COCO metrics <bold>(<inline-formula><tex-math>$mAP$</tex-math></inline-formula>, <inline-formula><tex-math>$mAP_{0.50}$</tex-math></inline-formula>, <inline-formula><tex-math>$mAP_{0.75}$</tex-math></inline-formula>, <inline-formula><tex-math>$mAP_{text{small}}$</tex-math></inline-formula>)</b> and find that methods originally designed for semantic segmentation struggling with instance-level tasks. On the other hand, two-stage detectors, especially Cascade Mask R-CNN (Vasconcelos et al., 2018) and its Hybrid Task Cascade (HTC) (Chen et al., 2019) variant, perform best when paired with modern backbone networks. 
In particular, our framework introduces a novel synergy between the ConvNeXtV2-Tiny backbone and the HTC architecture, enhanced by the CBAM (Kweon et al., 2018) attention mechanism (reduction ratio of eight), to achieve superior feature refinement and instance delineation. This configuration sets a new performance benchmark, achieving <inline-formula><tex-math>$mAP_{0.50} = 0.940$</tex-math></inline-formula>, <inline-formula><tex-math>$mAP_{0.75} = 0.882$</tex-math></inline-formula>, <inline-formula><tex-math>$mAP_{text{small}} = 0.788$</tex-math></inline-formula>, and <inline-formula><tex-math>$mAP_{0.50:0.95} = 0.787$</tex-math></inline-formula>, and directly supports the development of data-driven, clinically relevant microbial analysis systems that can guide evidence-based and timely antimicrobial treatment strategies.","PeriodicalId":13038,"journal":{"name":"IEEE Journal of Selected Topics in Signal Processing","volume":"19 8","pages":"1967-1982"},"PeriodicalIF":13.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146122732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
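All the reported $mAP$ variants rest on intersection over union: a predicted instance counts as a true positive under $mAP_{0.50}$, for example, only if its IoU with a ground-truth instance is at least 0.5. A minimal box-IoU sketch (instance masks work analogously, with pixel-wise intersections):

```python
def box_iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 boxes overlapping in a unit square: intersection 1, union 7.
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # ≈ 0.143, below the 0.5 cutoff
```

$mAP_{0.50:0.95}$ then averages the resulting average precision over IoU thresholds from 0.50 to 0.95 in steps of 0.05, which is why it is the strictest of the four numbers quoted.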
Speech understanding is essential for interpreting the diverse forms of information embedded in spoken language, including linguistic, paralinguistic, and non-linguistic cues that are vital for effective human-computer interaction. The rapid advancement of large language models (LLMs) has catalyzed the emergence of Speech Large Language Models (Speech LLMs), which marks a transformative shift toward general-purpose speech understanding systems. To further clarify and systematically delineate task objectives, in this paper, we formally define the concept of speech understanding and introduce a structured taxonomy encompassing its informational, functional, and format dimensions. Within this scope of definition, we present a comprehensive review of current Speech LLMs, analyzing their architectures through a three-stage abstraction: Modality Feature Extraction, Modality Information Fusion, and LLM Inference. In addition, we examine training strategies, discuss representative datasets, and review evaluation methodologies adopted in the field. Based on empirical analyses and experimental evidence, we identify two key challenges currently facing Speech LLMs—instruction sensitivity and degradation in semantic reasoning—and propose concrete directions for addressing these issues. Through this systematic and detailed survey, we aim to offer a foundational reference for researchers and practitioners working toward more robust, generalizable, and human-aligned Speech LLMs.
{"title":"A Survey on Speech Large Language Models for Understanding","authors":"Jing Peng;Yucheng Wang;Bohan Li;Yiwei Guo;Hankun Wang;YanGui Fang;Yu Xi;Haoyu Li;Xu Li;Ke Zhang;Shuai Wang;Kai Yu","doi":"10.1109/JSTSP.2025.3640535","DOIUrl":"https://doi.org/10.1109/JSTSP.2025.3640535","url":null,"abstract":"Speech understanding is essential for interpreting the diverse forms of information embedded in spoken language, including linguistic, paralinguistic, and non-linguistic cues that are vital for effective human-computer interaction. The rapid advancement of large language models (LLMs) has catalyzed the emergence of Speech Large Language Models (Speech LLMs), which marks a transformative shift toward general-purpose speech understanding systems. To further clarify and systematically delineate task objectives, in this paper, we formally define the concept of speech understanding and introduce a structured taxonomy encompassing its informational, functional, and format dimensions. Within this scope of definition, we present a comprehensive review of current Speech LLMs, analyzing their architectures through a three-stage abstraction: Modality Feature Extraction, Modality Information Fusion, and LLM Inference. In addition, we examine training strategies, discuss representative datasets, and review evaluation methodologies adopted in the field. Based on empirical analyses and experimental evidence, we identify two key challenges currently facing Speech LLMs—instruction sensitivity and degradation in semantic reasoning—and propose concrete directions for addressing these issues. 
Through this systematic and detailed survey, we aim to offer a foundational reference for researchers and practitioners working toward more robust, generalizable, and human-aligned Speech LLMs.","PeriodicalId":13038,"journal":{"name":"IEEE Journal of Selected Topics in Signal Processing","volume":"20 1","pages":"2-31"},"PeriodicalIF":13.7,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147274990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
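The survey's three-stage abstraction composes naturally as a pipeline. The toy stage implementations below are placeholders purely to show the data flow through the three stages; they stand in for a real feature extractor, fusion module, and LLM:

```python
def speech_llm_pipeline(extract, fuse, infer):
    """Compose the three-stage abstraction into a single callable.

    extract: waveform -> speech features   (Modality Feature Extraction)
    fuse:    features -> LLM-space tokens  (Modality Information Fusion)
    infer:   tokens   -> text response     (LLM Inference)
    """
    def run(waveform):
        return infer(fuse(extract(waveform)))
    return run

# Toy stand-ins: frame the signal, average each frame, "decode" a label.
extract = lambda wav: [wav[i:i + 4] for i in range(0, len(wav), 4)]
fuse = lambda frames: [sum(f) / len(f) for f in frames]
infer = lambda tokens: "high" if sum(tokens) > 0 else "low"
pipeline = speech_llm_pipeline(extract, fuse, infer)
print(pipeline([1, 2, 3, 4, -1, 0, 1, 0]))  # high
```

Factoring the system this way mirrors the survey's point that the three stages can be trained or swapped independently.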
Pub Date: 2025-12-03 | DOI: 10.1109/JSTSP.2025.3539208
Seungnyun Kim;Moe Z. Win
Recently, wideband beamforming realized by extremely large-scale antenna array (ELAA) systems has garnered significant interest as a means to dramatically improve the throughput of next-generation (xG) networks. However, traditional phase shifter (PS)-based beamforming schemes might not work well in wideband ELAA systems due to the beam squint effect, where beams at different frequencies become misaligned. While the use of true time delay (TTD) can mitigate the beam squint effect by generating frequency-dependent beamforming vectors, conventional TTD-based beamforming schemes suffer from significant array gain loss caused by discrepancies between the desired directional beams and the generated beams. This beam misalignment issue becomes even more pronounced in wideband ELAA systems due to the nonlinear near-field characteristics described by spherical wave propagation. In this paper, we propose a wideband dynamic array-of-subarrays (WDAoSA) architecture that dynamically configures connections between TTDs and PS subarrays using a switch network. By optimizing the subarray connections as well as the TTD time delays and PS phase shifts, WDAoSA can effectively maximize the array gain of wideband ELAA systems.
{"title":"Dynamic Array-of-Subarrays Architecture for Wideband Multi-Antenna Systems","authors":"Seungnyun Kim;Moe Z. Win","doi":"10.1109/JSTSP.2025.3539208","DOIUrl":"https://doi.org/10.1109/JSTSP.2025.3539208","url":null,"abstract":"Recently, wideband beamforming realized by extremely large-scale antenna array (ELAA) systems have garnered significant interest as a means to dramatically improve throughput of next generation (xG) networks. However, traditional phase shifter (PS)-based beamforming schemes might not work well in wideband ELAA systems due to the beam squint effect, where beams at different frequencies become misaligned. While the use of true time delay (TTD) can mitigate the beam squint effect by generating frequency-dependent beamforming vectors, the conventional TTD-based beamforming schemes suffer from significant array gain loss caused by the discrepancies between the desired directional beams and the generated beams. This beam misalignment issue becomes even more pronounced in the wideband ELAA systems due to the nonlinear near-field characteristics described by the spherical wave propagation. In this paper, we propose a wideband dynamic array-of-subarrays (WDAoSA) architecture that dynamically configures connections between TTDs and PS subarrays using a switch network. By optimizing the subarray connections as well as the TTD time delays and PS phase shifts, WDAoSA can effectively maximize the array gain of wideband ELAA systems. 
Numerical results demonstrate that WDAoSA achieves significant improvements in terms of array gain and spectral efficiency over the conventional TTD-based beamforming schemes.","PeriodicalId":13038,"journal":{"name":"IEEE Journal of Selected Topics in Signal Processing","volume":"19 5","pages":"840-855"},"PeriodicalIF":13.7,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145659230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
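The beam squint effect itself is easy to reproduce numerically under a simple far-field uniform-linear-array model (the paper additionally treats near-field spherical-wave effects, which this sketch omits): phase-shifter weights are matched only at the carrier frequency, while true-time-delay weights scale their phase with frequency and stay aligned.

```python
import cmath
import math

def array_gain(n_ant, f_c, f, theta0, use_ttd):
    """Normalized gain toward theta0 at frequency f for a far-field ULA
    with half-wavelength spacing at the carrier f_c.

    use_ttd=False models phase shifters (phase fixed at f_c, so the beam
    squints at other frequencies); use_ttd=True models true time delays.
    """
    c = 3e8
    d = c / (2 * f_c)                               # half wavelength at f_c
    tau = [n * d * math.sin(theta0) / c for n in range(n_ant)]
    f_w = f if use_ttd else f_c                     # frequency the weights use
    resp = sum(cmath.exp(2j * math.pi * (f - f_w) * t) for t in tau)
    return abs(resp) / n_ant

# 64 elements steered to 60 degrees, 100 GHz carrier, evaluated at 110 GHz.
print(array_gain(64, 100e9, 110e9, math.pi / 3, use_ttd=True))   # 1.0
print(array_gain(64, 100e9, 110e9, math.pi / 3, use_ttd=False))  # well below 1
```

The 10 GHz offset costs the phase-shifter array most of its gain, which is the loss the proposed WDAoSA architecture is designed to recover.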
Pub Date: 2025-12-03 | DOI: 10.1109/JSTSP.2025.3607338
Title: IEEE Signal Processing Society Publication Information
IEEE Journal of Selected Topics in Signal Processing, vol. 19, no. 5, pp. C2-C2
Pub Date: 2025-12-03 | DOI: 10.1109/JSTSP.2025.3607336
Title: IEEE Signal Processing Society Information
IEEE Journal of Selected Topics in Signal Processing, vol. 19, no. 5, pp. C3-C3