
IEEE Transactions on Neural Networks and Learning Systems: Latest Articles

SFedCA: Credit Assignment-Based Active Client Selection Strategy for Spiking Federated Learning
IF 8.9 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-09 | DOI: 10.1109/TNNLS.2025.3639578
Qiugang Zhan, Jinbo Cao, Xiurui Xie, Huajin Tang, Malu Zhang, Shantian Yang, Guisong Liu

Spiking federated learning (FL) is an emerging distributed learning paradigm that allows resource-constrained devices to train collaboratively at low power without exchanging local data. It combines the privacy-preserving property of FL with the energy efficiency of spiking neural networks (SNNs). However, existing spiking FL methods select clients for aggregation at random, assuming unbiased client participation. This neglect of statistical heterogeneity significantly degrades the convergence and accuracy of the global model. In this work, we propose SFedCA, a credit assignment-based active client selection strategy for spiking federated learning, which judiciously aggregates clients that contribute to balancing the global sample distribution. Specifically, client credits are assigned from the firing-intensity state before and after local model training, which reflects how far the local data distribution lies from the global model. Comprehensive experiments are conducted under various non-independent and identically distributed (non-IID) scenarios. The results demonstrate that SFedCA outperforms existing state-of-the-art spiking FL methods while requiring fewer communication rounds.
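The credit-and-select loop described in the abstract can be sketched as follows. This is a hypothetical illustration only: the function names, the use of mean firing rates as the intensity state, and the top-k selection rule are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def assign_credits(rates_before, rates_after):
    """Credit = magnitude of the firing-intensity change over local training,
    used as a proxy for how far a client's local data distribution sits
    from the global model."""
    return np.abs(np.asarray(rates_after) - np.asarray(rates_before))

def select_clients(credits, k):
    """Pick the k clients with the largest credits for the next round."""
    return np.argsort(credits)[::-1][:k].tolist()

# Mean firing rates reported by four clients before/after local training.
rates_before = [0.20, 0.35, 0.10, 0.50]
rates_after = [0.22, 0.15, 0.40, 0.52]
credits = assign_credits(rates_before, rates_after)
chosen = select_clients(credits, k=2)  # clients whose firing state moved most
```

Under this sketch, clients whose firing intensity changes little during local training resemble the global model and are deprioritized, which is one plausible way to bias selection toward distribution-balancing clients.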

Citations: 0
Rethinking Decoupled Knowledge Distillation: A Predictive Distribution Perspective
IF 8.9 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-09 | DOI: 10.1109/TNNLS.2025.3639562
Bowen Zheng, Ran Cheng

In the history of knowledge distillation (KD), the focus shifted over time from logit-based to feature-based approaches. However, this transition has been revisited with the advent of decoupled KD (DKD), which reemphasizes the importance of logit knowledge through advanced decoupling and weighting strategies. While DKD marks a significant advancement, its underlying mechanisms merit deeper exploration. In response, we rethink DKD from a predictive distribution perspective. First, we introduce an enhanced version, the generalized DKD (GDKD) loss, which offers a more versatile way to decouple logits. We then pay particular attention to the teacher model's predictive distribution and its impact on the gradients of the GDKD loss, uncovering two critical insights that are often overlooked: 1) partitioning by the top logit considerably improves the interrelationship of the nontop logits and 2) amplifying the focus on the distillation loss of the nontop logits enhances the knowledge extracted from them. Building on these insights, we further propose a streamlined GDKD algorithm with an efficient partition strategy to handle the multimodality of teacher models' predictive distributions. Comprehensive experiments on a variety of benchmarks, including CIFAR-100, ImageNet, Tiny-ImageNet, CUB-200-2011, and Cityscapes, demonstrate GDKD's superior performance over both the original DKD and other leading KD methods. The code is available at https://github.com/ZaberKo/GDKD.
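The logit decoupling that DKD-style losses build on can be sketched as below. This follows the general DKD idea of splitting the distillation loss into a target-vs-rest part and a renormalized nontarget part with separate weights; it is not the authors' GDKD code (that is at the GitHub URL above), and the function names and weights are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def decoupled_kd_loss(t_logits, s_logits, target, alpha=1.0, beta=8.0):
    pt, ps = softmax(t_logits), softmax(s_logits)
    # Binary (target vs. rest) part: match the teacher's confidence
    # on the ground-truth class.
    b_t = np.array([pt[target], 1.0 - pt[target]])
    b_s = np.array([ps[target], 1.0 - ps[target]])
    tckd = kl(b_t, b_s)
    # Nontarget part: renormalize over the remaining classes, then compare.
    # Weighting this part up (beta > alpha) is what recovers the "dark
    # knowledge" among nontop classes.
    nt = np.delete(pt, target); nt = nt / nt.sum()
    ns = np.delete(ps, target); ns = ns / ns.sum()
    nckd = kl(nt, ns)
    return alpha * tckd + beta * nckd

loss = decoupled_kd_loss(np.array([4.0, 1.0, 0.5]),
                         np.array([3.0, 1.5, 0.5]), target=0)
```

The key property the abstract exploits is that, after this split, the gradient of the nontarget term depends only on the teacher's distribution over nontop classes, so how that distribution is partitioned directly shapes what the student learns.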

Citations: 0
Deep Unfolding of Tail-Based Methods for Robust Sparse Recovery Under Noise and Model Mismatch
IF 10.4 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-09 | DOI: 10.1109/tnnls.2025.3638635
Yhonatan Kvich,Pagoti Reshma,Pradyumna Pradhan,Ramunaidu Randhi,Yonina C Eldar
In this article, we introduce a deep unfolding framework for the Tail-iterative soft-thresholding algorithm (Tail-ISTA) and Tail-fast ISTA (Tail-FISTA), extending classical sparse recovery algorithms into learned architectures and improving upon existing unfolding techniques. By combining the interpretability of iterative solvers with the adaptability of model-based networks, our approach achieves efficient and robust recovery of sparse signals. Tail-based methods incorporate an iterative support estimation step, where the support and target estimations are refined alternately, providing a key advantage over traditional approaches. We integrate this into our architecture, enhancing both recovery performance and noise robustness. We compare the proposed methods against classical solvers, including FISTA and Tail-FISTA, as well as the deep unfolding techniques LISTA and DU-FISTA, across various sparsity levels, dynamic ranges (DRs), and both noiseless and noisy conditions. In noiseless cases, our methods achieve slightly lower performance than classical solvers but with significantly reduced computational costs. Under heavy noise and a high number of nonzero elements, where classical methods struggle, our learned approaches remain resilient and achieve improved recovery rates. To evaluate generalization, we also test our methods on data generated with a perturbed sensing matrix; in this noisy, mismatched setting, our proposed methods outperform classical sparse recovery algorithms. The proposed framework is general and applies to any linear sparse recovery task in compressed sensing (CS), offering computational efficiency, robustness to noise, and adaptability to real-world data, showcasing the advantages of deep unfolding techniques with iterative support estimation.
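For context, the classical ISTA iteration that such unfolded networks start from can be sketched as follows: minimize ||Ax - y||² + λ||x||₁ by alternating a gradient step with soft thresholding. The unfolded variants in the article replace the fixed step size and threshold with learned, per-layer parameters; the sketch below is plain ISTA with illustrative names, not the authors' Tail-ISTA code.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of 0.5*||Ax - y||^2 (scaled)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Synthetic sparse recovery problem: 3 nonzeros in a length-60 signal,
# observed through a 30 x 60 Gaussian sensing matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = ista(A, y)
```

In an unfolded network, each loop iteration becomes one layer, and `1/L` and `lam/L` become trainable weights, which is what lets the learned solver stay accurate in far fewer iterations.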
Citations: 0
2025 Index IEEE Transactions on Neural Networks and Learning Systems
IF 8.9 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-09 | DOI: 10.1109/TNNLS.2025.3642587 | Vol. 36(12), pp. 20470-20767
Citations: 0
Hyperspectral Anomaly Detection via Hybrid Convolutional and Transformer-Based U-Net With Error Attention Mechanism
IF 10.4 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-08 | DOI: 10.1109/tnnls.2025.3634765
Xiaoyi Wang, Peng Wang, Juan Cheng, Daiyin Zhu, Henry Leung, Paolo Gamba
Citations: 0
A Systematic Review of Skeleton-Based Action Recognition: Methods, Challenges, and Future Directions
IF 10.4 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-08 | DOI: 10.1109/tnnls.2025.3632689
Yi Liu, Ruyi Liu, Yuzhi Hu, Mengyao Wu, Wentian Xin, Qiguang Miao, Shuai Wu, Long Li
Citations: 0
Mutually Guided Fusion Learning for Collaborative Camouflaged Object Segmentation
IF 10.4 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-05 | DOI: 10.1109/tnnls.2025.3636523
Chen Li, Xiao Luan, Linghui Liu, Yanzhao Su, Yule Fu, Weisheng Li
Citations: 0
P³L: Patent Prediction With Prompt Learning
IF 10.4 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-04 | DOI: 10.1109/tnnls.2025.3635881
Yi-Hong Lu, Pei-Yuan Lai, Man-Sheng Chen, Huan-Tao Cai, Zeng-Hui Wang, Shuang-Yin Liu, Qing-Yun Dai, Chang-Dong Wang
Citations: 0
LeapVAD: A Leap in Autonomous Driving via Cognitive Perception and Dual-Process Thinking
IF 10.4 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-04 | DOI: 10.1109/tnnls.2025.3626711
Yukai Ma, Tiantian Wei, Naiting Zhong, Jianbiao Mei, Tao Hu, Licheng Wen, Xuemeng Yang, Botian Shi, Yong Liu
Citations: 0
Efficient and Scalable Point Cloud Generation With Sparse Point-Voxel Diffusion Models
IF 10.4 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-04 | DOI: 10.1109/tnnls.2025.3636409
Ioannis Romanelis, Vlassis Fotis, Athanasios Kalogeras, Christos Alexakos, Adrian Munteanu, Konstantinos Moustakas
Citations: 0