
IEEE open journal of signal processing: latest publications

Biorthogonal Lattice Tunable Wavelet Units and Their Implementation in Convolutional Neural Networks for Computer Vision Problems
IF 2.9 | Q2 | ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-18 | DOI: 10.1109/OJSP.2025.3580967
An D. Le;Shiwei Jin;Sungbal Seo;You-Suk Bae;Truong Q. Nguyen
This work introduces a universal wavelet unit constructed with a biorthogonal lattice structure, a novel tunable wavelet unit that enhances image classification and anomaly detection in convolutional neural networks by reducing information loss during pooling. The unit employs the biorthogonal lattice structure to modify convolution, pooling, and down-sampling operations. Implemented in an 18-layer residual neural network, it improved accuracy on CIFAR10 (by 2.67%), ImageNet1K (by 1.85%), and the Describable Textures dataset (by 11.81%), showcasing its advantage in detecting detailed features. Similar gains were achieved in implementations for 34-layer and 50-layer residual neural networks. For anomaly detection on the MVTec Anomaly Detection and TUKPCB datasets, the proposed method achieved competitive performance and better anomaly localization.
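As a rough illustration of the idea, the sketch below implements a one-level tunable lifting decomposition in NumPy, where the lowpass branch acts as an anti-aliased replacement for stride-2 pooling. Lifting steps are one standard way to realize biorthogonal lattice filter banks; the parameter names `p` and `u` and this specific two-step structure are our assumptions, not the unit proposed in the paper.

```python
import numpy as np

def tunable_lifting_pool(x, p=1.0, u=0.5):
    """One level of a tunable lifting (lattice-style) wavelet analysis step.

    Splits x into even/odd samples, predicts odd from even, then updates
    even with the residual; the updated even branch is a half-rate,
    anti-aliased 'pooled' signal, and the residual is a detail channel.
    """
    even, odd = x[0::2], x[1::2]
    n = min(len(even), len(odd))
    even, odd = even[:n], odd[:n]
    detail = odd - p * even      # predict step (highpass residual)
    approx = even + u * detail   # update step (lowpass / pooled output)
    return approx, detail

x = np.arange(8, dtype=float)
approx, detail = tunable_lifting_pool(x)   # p=1, u=0.5 gives Haar averages
print(approx)                              # [0.5 2.5 4.5 6.5]
```

With p = 1 and u = 0.5 this reduces to Haar averaging; making p and u trainable parameters recovers the "tunable" aspect of a wavelet-based downsampler.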
Citations: 0
In-Scene Calibration of Poisson Noise Parameters for Phase Image Recovery
IF 2.9 | Q2 | ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-13 | DOI: 10.1109/OJSP.2025.3579650
Achour Idoughi;Sreelakshmi Sreeharan;Chen Zhang;Joseph Raffoul;Hui Wang;Keigo Hirakawa
In sensor metrology, the noise parameters governing the stochastic nature of photon detectors play a critical role in characterizing the aleatoric uncertainty of computational imaging systems such as indirect time-of-flight cameras, structured light imaging, and division-of-time polarimetric imaging. Standard calibration procedures exist for extracting the noise parameters using calibration targets, but they are inconvenient or impractical to repeat frequently. To keep up with noise parameters that are dynamically affected by sensor settings (e.g., exposure and gain) as well as environmental factors (e.g., temperature), we propose an In-Scene Calibration of Poisson Noise Parameters (ISC-PNP) method that does not require calibration targets. The main challenge lies in the heteroskedastic nature of the noise and the confounding influence of scene content. To address this, our method leverages global joint statistics of the Poisson sensor data, which can be interpreted as a binomial random variable. We experimentally confirm that the noise parameters extracted by the proposed ISC-PNP and the standard calibration procedure are well matched.
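For context, a common target-free baseline (not the binomial-statistics method proposed here) estimates the Poisson gain from two registered frames of a static scene: for y = g · Poisson(λ), half the squared frame difference has mean g²λ while the frame average has mean gλ, so their ratio recovers g. A minimal NumPy sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic static scene: per-pixel photon rates and a camera gain to recover.
true_gain = 2.4
rates = rng.uniform(5.0, 200.0, size=100_000)
y1 = true_gain * rng.poisson(rates)
y2 = true_gain * rng.poisson(rates)

# For y = g * Poisson(lam):  E[(y1 - y2)^2] / 2 = g^2 * lam  and
# E[(y1 + y2) / 2] = g * lam, so a ratio of the two recovers g.
half_diff_sq = 0.5 * (y1 - y2) ** 2
mean_img = 0.5 * (y1 + y2)
gain_hat = half_diff_sq.sum() / mean_img.sum()   # method-of-moments ratio estimate
print(f"estimated gain {gain_hat:.3f} vs true {true_gain}")
```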
Citations: 0
Continuous Relaxation of Discontinuous Shrinkage Operator: Proximal Inclusion and Conversion
IF 2.9 | Q2 | ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-13 | DOI: 10.1109/OJSP.2025.3579646
Masahiro Yukawa
We present a principled way of deriving a continuous relaxation of a given discontinuous shrinkage operator, based on two fundamental results: proximal inclusion and conversion. Using our results, the discontinuous operator is converted, via double inversion, to a continuous operator; more precisely, the associated “set-valued” operator is converted to a “single-valued” Lipschitz continuous operator. The first illustrative example is the firm shrinkage operator, which can be derived as a continuous relaxation of the hard shrinkage operator. We also derive a new operator as a continuous relaxation of the discontinuous shrinkage operator associated with the so-called reverse ordered weighted $\ell_{1}$ (ROWL) penalty. Numerical examples demonstrate potential advantages of the continuous relaxation.
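The firm shrinkage operator cited as the first example is classical (due to Gao and Bruce), so it can be written down concretely; a NumPy sketch with our own parameter names lam1 < lam2:

```python
import numpy as np

def hard_shrink(x, lam):
    """Discontinuous hard shrinkage: zero on [-lam, lam], identity outside."""
    return np.where(np.abs(x) > lam, x, 0.0)

def firm_shrink(x, lam1, lam2):
    """Firm shrinkage (Gao & Bruce): zero on [-lam1, lam1], linear for
    lam1 < |x| <= lam2, and exactly the identity for |x| > lam2. It is
    Lipschitz continuous and tends to hard shrinkage as lam2 -> lam1."""
    ax = np.abs(x)
    mid = np.sign(x) * lam2 * (ax - lam1) / (lam2 - lam1)
    return np.where(ax <= lam1, 0.0, np.where(ax <= lam2, mid, x))

x = np.linspace(-3, 3, 13)
print(firm_shrink(x, lam1=1.0, lam2=2.0))   # continuous, unlike hard_shrink
```

As lam2 approaches lam1 the firm operator approaches hard shrinkage, while for any lam2 > lam1 it remains single-valued and Lipschitz continuous, which is exactly the kind of relaxation the abstract describes.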
Citations: 0
On Conditional Independence Graph Learning From Multi-Attribute Gaussian Dependent Time Series
IF 2.9 | Q2 | ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-11 | DOI: 10.1109/OJSP.2025.3578807
Jitendra K. Tugnait
Estimation of the conditional independence graph (CIG) of a high-dimensional multivariate Gaussian time series from multi-attribute data is considered. Existing graph estimation methods for such data are based on single-attribute models in which a scalar time series is associated with each node. In multi-attribute graphical models, each node represents a random vector or vector time series. In this paper we provide a unified theoretical analysis of multi-attribute graph learning for dependent time series, using a penalized log-likelihood objective function formulated in the frequency domain via the discrete Fourier transform of the time-domain data. We consider both convex (sparse-group lasso) and non-convex (log-sum and SCAD group penalties) penalty/regularization functions. We establish sufficient conditions in a high-dimensional setting for consistency (convergence of the inverse power spectral density to its true value in the Frobenius norm), local convexity when using non-convex penalties, and graph recovery. We do not impose any incoherence or irrepresentability condition for our convergence results. We also empirically investigate selection of the tuning parameters based on the Bayesian information criterion, and illustrate our approach using numerical examples with both synthetic and real data.
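To make the frequency-domain statistics concrete, the sketch below forms smoothed spectral density matrices from DFT coefficients and thresholds the resulting partial coherences. This is an unpenalized, single-attribute baseline for CIG estimation; the paper's penalized multi-attribute estimator is substantially more involved, and the window size `m` and threshold here are illustrative assumptions.

```python
import numpy as np

def spectral_cig(x, m=5, thresh=0.1):
    """Baseline CIG estimate for a p-variate time series: average nearby
    DFT outer products into spectral matrices, invert them, and declare an
    edge wherever the partial coherence is large at some frequency.

    x: (n, p) array holding n samples of a p-dimensional series.
    m: number of adjacent Fourier frequencies averaged per estimate.
    """
    n, p = x.shape
    X = np.fft.fft(x - x.mean(0), axis=0) / np.sqrt(n)   # DFT of each series
    edges = np.zeros((p, p))
    for k0 in range(1, n // 2 - m, m):
        D = X[k0:k0 + m]                         # m nearby frequencies
        S = D.conj().T @ D / m                   # smoothed spectral matrix
        K = np.linalg.inv(S + 1e-6 * np.eye(p))  # inverse spectral density
        d = np.sqrt(np.abs(np.diag(K)))
        pc = np.abs(K) / np.outer(d, d)          # partial coherence near k0
        edges = np.maximum(edges, pc)
    A = (edges > thresh).astype(int)
    np.fill_diagonal(A, 0)
    return A
```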
Citations: 0
Random Matrix Theory Predictions of Dominant Mode Rejection SINR Loss due to Signal in the Training Data
IF 2.9 | Q2 | ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-11 | DOI: 10.1109/OJSP.2025.3578812
Christopher C. Hulbert;Kathleen E. Wage
Detection and estimation performance depends on the signal-to-interference-plus-noise ratio (SINR) at the output of an array. The Capon beamformer (BF) designed with ensemble statistics achieves the optimum SINR in stationary environments. Adaptive BFs compute their weights using the sample covariance matrix (SCM) obtained from snapshots, i.e., training samples. SINR loss, the ratio of adaptive to optimal SINR, quantifies the number of snapshots required to achieve a desired average level of performance. For adaptive Capon BFs that invert the full SCM, Reed et al. derived the SINR loss distribution, and Miller quantified how the desired signal’s presence in the snapshots degrades that loss. Abraham and Owsley designed dominant mode rejection (DMR) for cases where the number of snapshots is less than or approximately equal to the number of sensors. DMR’s success in snapshot-starved passive sonar scenarios led to its application in other areas such as hyperspectral sensing and medical imaging. DMR forms a modified SCM as a weighted combination of the identity matrix and the dominant eigensubspace containing the loud interferers, thereby avoiding inversion of the poorly estimated noise subspace. This work leverages recent random matrix theory (RMT) results to develop DMR performance predictions under the assumption that the desired signal is contained in the training data. Using white noise gain and interference suppression predictions, the paper derives a lower bound on DMR’s average SINR loss and confirms its accuracy using Monte Carlo simulations. Moreover, this paper introduces a new eigensubspace leakage estimator applicable to broader RMT applications.
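A textbook Abraham-Owsley DMR weight computation (without the paper's RMT analysis) can be sketched as follows; the `rank` argument, i.e., the assumed number of dominant interferers, must be supplied by the user.

```python
import numpy as np

def dmr_weights(snapshots, steering, rank):
    """Dominant mode rejection weights from snapshots (sensors x N).

    Eigendecompose the sample covariance matrix, keep the top-`rank`
    (interferer) eigenpairs, replace the remaining eigenvalues by their
    average, and form MVDR-style weights from the modified covariance.
    """
    sensors, n = snapshots.shape
    scm = snapshots @ snapshots.conj().T / n
    vals, vecs = np.linalg.eigh(scm)           # ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]     # descending order
    noise_avg = vals[rank:].mean()             # averaged noise eigenvalues
    inv_vals = np.concatenate([1.0 / vals[:rank],
                               np.full(sensors - rank, 1.0 / noise_avg)])
    r_inv = (vecs * inv_vals) @ vecs.conj().T  # inverse of the modified SCM
    w = r_inv @ steering
    return w / (steering.conj() @ w)           # distortionless response constraint
```

Replacing the noise eigenvalues with their average is what removes the poorly estimated noise subspace from the inversion in the snapshot-starved regime.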
Citations: 0
The First Cadenza Challenges: Using Machine Learning Competitions to Improve Music for Listeners With a Hearing Loss
IF 2.9 | Q2 | ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-10 | DOI: 10.1109/OJSP.2025.3578299
Gerardo Roa-Dabike;Michael A. Akeroyd;Scott Bannister;Jon P. Barker;Trevor J. Cox;Bruno Fazenda;Jennifer Firth;Simone Graetzer;Alinka Greasley;Rebecca R. Vos;William M. Whitmer
Listening to music can be an issue for those with a hearing impairment, and hearing aids are not a universal solution. This paper details the first use of an open challenge methodology to improve the audio quality of music for those with hearing loss through machine learning. The first challenge (CAD1) had 9 participants. The second was a 2024 ICASSP grand challenge (ICASSP24), which attracted 17 entrants. The challenge tasks concerned demixing and remixing pop/rock music to allow a personalized rebalancing of the instruments in the mix, along with amplification to correct for raised hearing thresholds. The software baselines provided for entrants to build upon used two state-of-the-art demixing algorithms: Hybrid Demucs and Open-Unmix. Objective evaluation used HAAQI, the Hearing-Aid Audio Quality Index. No entries improved on the best baseline in CAD1. It is suggested that this arose because demixing algorithms are relatively mature, and recent work has shown that access to large (private) datasets is needed to further improve performance. Learning from this, for ICASSP24 the scenario was made more difficult by using loudspeaker reproduction and by specifying gains to be applied before remixing. This also made the scenario more useful for listening through hearing aids. Nine entrants scored better than the best ICASSP24 baseline. Most of the entrants used a refined version of Hybrid Demucs with NAL-R amplification. The highest-scoring system combined the outputs of several demixing algorithms in an ensemble approach. These challenges are now open benchmarks for future research, with freely available software and data.
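As a minimal illustration of the remixing task, the sketch below applies per-stem gains and sums the result. The stem names, the dB gain convention, and the peak-normalization policy are our assumptions; the challenge pipelines additionally apply hearing-loss amplification (e.g., NAL-R), which is not shown.

```python
import numpy as np

def remix(stems, gains_db):
    """Personalized remix: apply per-stem gains in dB to demixed stems
    (a dict of name -> equal-length audio arrays) and sum them."""
    out = None
    for name, audio in stems.items():
        g = 10.0 ** (gains_db.get(name, 0.0) / 20.0)
        out = g * audio if out is None else out + g * audio
    peak = np.max(np.abs(out))
    return out / peak if peak > 1.0 else out   # simple anti-clipping policy

# e.g. boost vocals and cut drums in stems produced by any demixer:
# mix = remix(stems, {"vocals": +6.0, "drums": -3.0})
```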
Citations: 0
AVCaps: An Audio-Visual Dataset With Modality-Specific Captions
IF 2.9 | Q2 | ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-09 | DOI: 10.1109/OJSP.2025.3578296
Parthasaarathy Sudarsanam;Irene Martín-Morató;Aapo Hakala;Tuomas Virtanen
This paper introduces AVCaps, an audio-visual dataset that contains separate textual captions for the audio, visual, and audio-visual contents of video clips. The dataset contains 2061 video clips, constituting a total of 28.8 hours. We provide up to 5 captions for the audio, visual, and audio-visual content of each clip, crowdsourced separately. Existing datasets focus on a single modality or do not provide modality-specific captions, limiting the study of how each modality contributes to overall comprehension in multimodal settings. Our dataset addresses this critical gap in multimodal research by offering a resource for studying how audio and visual content are captioned individually, as well as how audio-visual content is captioned in relation to these individual modalities. Crowdsourced audio-visual captions tend to favor visual content over audio content. To avoid this, we use large language models (LLMs) to generate three balanced audio-visual captions for each clip based on the crowdsourced captions. We present captioning and retrieval experiments to illustrate the effectiveness of modality-specific captions in evaluating model performance. Specifically, we show that the modality-specific captions allow us to quantitatively assess how well a model understands audio and visual information from a given video. Notably, we find that a model trained on the balanced LLM-generated audio-visual captions captures audio information more effectively than a model trained on crowdsourced audio-visual captions. This model achieves a 14% higher Sentence-BERT similarity on crowdsourced audio captions compared to a model trained on crowdsourced audio-visual captions, which are typically more biased towards visual information. We also discuss the possibilities this dataset unlocks in multimodal representation learning, question answering, new video captioning metrics, and generative AI. The dataset is publicly available on Zenodo and Hugging Face.
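The Sentence-BERT similarity used in the evaluation can be computed with the sentence-transformers package; the specific checkpoint and the example captions below are our assumptions, not taken from the dataset or the paper.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical pair: a model's generated audio caption vs. a reference caption.
generated = "a dog barks while cars pass by on a busy street"
reference = "traffic noise with a dog barking repeatedly"

model = SentenceTransformer("all-MiniLM-L6-v2")    # any sentence encoder works
emb = model.encode([generated, reference], convert_to_tensor=True)
print(f"Sentence-BERT similarity: {util.cos_sim(emb[0], emb[1]).item():.3f}")
```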
Citations: 0
Federated Learning With Automated Dual-Level Hyperparameter Tuning
IF 2.9 | Q2 | ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-09 | DOI: 10.1109/OJSP.2025.3578273
Rakib Ul Haque;Panagiotis Markopoulos
Federated Learning (FL) is a decentralized machine learning (ML) approach where multiple clients collaboratively train a shared model over several update rounds without exchanging local data. Similar to centralized learning, determining hyperparameters (HPs) like learning rate and batch size remains challenging yet critical for model performance. Current adaptive HP-tuning methods are often domain-specific and heavily influenced by initialization. Moreover, model accuracy often improves slowly, requiring many update rounds. This slow improvement is particularly problematic for FL, where each update round incurs high communication costs in addition to computation and energy costs. In this work, we introduce FLAUTO, the first method to perform dynamic HP-tuning simultaneously at both local (client) and global (server) levels. This dual-level adaptation directly addresses critical bottlenecks in FL, including slow convergence, client heterogeneity, and high communication costs, distinguishing it from existing approaches. FLAUTO leverages training loss and relative local model deviation as novel metrics, enabling robust and dynamic hyperparameter adjustments without reliance on initial guesses. By prioritizing high performance in early update rounds, FLAUTO significantly reduces communication and energy overhead—key challenges in FL deployments. Comprehensive experimental studies on image classification and object detection tasks demonstrate that FLAUTO consistently outperforms state-of-the-art methods, establishing its efficacy and broad applicability.
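A toy sketch of dual-level, loss-driven hyperparameter adaptation inside a FedAvg loop is shown below on synthetic least-squares clients. The two back-off rules (per-client and server-side) are illustrative assumptions in the spirit of the abstract's loss-based metrics, not FLAUTO's actual update criteria.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_clients = 5, 4
w_true = rng.normal(size=d)
data = []
for _ in range(n_clients):                   # synthetic local datasets
    X = rng.normal(size=(50, d))
    data.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

def local_step(w, X, y, lr):
    """One client update: a gradient step on local least squares."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w = w - lr * grad
    return w, float(np.mean((X @ w - y) ** 2))

w_glob = np.zeros(d)
client_lr = np.full(n_clients, 0.1)          # local (per-client) hyperparameter
client_loss = np.full(n_clients, np.inf)
global_scale, prev_avg = 1.0, np.inf         # global (server) hyperparameter
for rnd in range(30):
    updates = []
    for i, (X, y) in enumerate(data):
        w_i, loss_i = local_step(w_glob.copy(), X, y, client_lr[i] * global_scale)
        if loss_i > client_loss[i]:          # local rule: back off on divergence
            client_lr[i] *= 0.5
        client_loss[i] = loss_i
        updates.append(w_i)
    w_glob = np.mean(updates, axis=0)        # FedAvg aggregation
    avg = float(np.mean(client_loss))
    if avg > 0.99 * prev_avg:                # global rule: decay on plateau
        global_scale *= 0.5
    prev_avg = avg
print(f"final average loss: {prev_avg:.4f}")
```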
Citations: 0
Unsupervised Action Anticipation Through Action Cluster Prediction
IF 2.9 | Q2 | ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-09 | DOI: 10.1109/OJSP.2025.3578300
Jiuxu Chen;Nupur Thakur;Sachin Chhabra;Baoxin Li
Predicting near-future human actions in videos has become a focal point of research, driven by applications such as human-helping robotics, collaborative AI services, and surveillance video analysis. However, the inherent challenge lies in deciphering the complex spatial-temporal dynamics of typical video feeds. While existing works excel in constrained settings with fine-grained action ground-truth labels, the general unavailability of such labeling at the frame level poses a significant hurdle. In this paper, we present an innovative solution that anticipates future human actions without relying on any form of supervision. Our approach generates pseudo-labels for video frames by clustering frame-wise visual features. These pseudo-labels are then input to a temporal sequence modeling module that learns to predict future actions in terms of pseudo-labels. Alongside the anticipation method, we propose GreedyMapper, an innovative evaluation scheme that provides a practical solution to the many-to-one mapping challenge between predicted clusters and ground-truth actions, a task that existing mapping algorithms struggle to address. Through comprehensive experiments on demanding real-world cooking datasets, our unsupervised method outperforms weakly-supervised approaches by a significant margin on the 50Salads dataset. When applied to the Breakfast dataset, our approach yields strong performance compared to the baselines in an unsupervised setting and delivers results competitive with (weakly) supervised methods under a similar setting.
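A minimal stand-in for the pipeline: clustering produces pseudo-labels, and a first-order Markov transition matrix serves as a (much simpler) substitute for the learned temporal sequence module. The cluster count and the add-one smoothing are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_pseudo_label_anticipator(frame_features, n_clusters=8):
    """Cluster frame-wise features into pseudo-labels, then fit a
    Laplace-smoothed first-order transition matrix over the label sequence."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(frame_features)
    T = np.ones((n_clusters, n_clusters))          # add-one smoothing
    for a, b in zip(labels[:-1], labels[1:]):
        T[a, b] += 1.0
    T /= T.sum(axis=1, keepdims=True)              # row-stochastic transitions
    return labels, T

def anticipate(T, current_label, steps=1):
    """Most likely pseudo-action `steps` transitions into the future."""
    return int(np.argmax(np.linalg.matrix_power(T, steps)[current_label]))
```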
Citations: 0
Multidimensional Polynomial Phase Estimation
IF 2.9 | Q2 | ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-06 | DOI: 10.1109/OJSP.2025.3577503
Heedong Do;Namyoon Lee;Angel Lozano
An estimation method is presented for polynomial phase signals, i.e., those adopting the form of a complex exponential whose phase is polynomial in its indices. Transcending the scope of existing techniques, the proposed estimator can handle an arbitrary number of dimensions and an arbitrary set of polynomial degrees along each dimension; the only requirement is that the number of observations per dimension exceeds the highest degree thereon. Embodied by a highly compact sequential algorithm, this estimator is efficient at high signal-to-noise ratios (SNRs), exhibiting a computational complexity that is strictly linear in the number of observations and at most quadratic in the number of polynomial terms. To reinforce the performance at low and medium SNRs, where any phase estimator is bound to be hampered by the inherent ambiguity caused by phase wrappings, suitable functionalities are incorporated and shown to be highly effective.
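In one dimension and at high SNR, a simple baseline with the same goal is to unwrap the measured phase and fit a polynomial by least squares; the paper's sequential estimator generalizes this to arbitrary dimensions and adds machinery for the wrapping ambiguity at lower SNRs. A NumPy sketch:

```python
import numpy as np

def estimate_phase_poly(x, degree):
    """High-SNR 1-D polynomial phase estimation: unwrap the measured phase
    and least-squares fit a polynomial in the sample index. Returns
    coefficients, highest degree first (np.polyfit convention)."""
    phase = np.unwrap(np.angle(x))
    return np.polyfit(np.arange(len(x)), phase, degree)

# Quick check on a noisy quadratic-phase (chirp-like) signal.
rng = np.random.default_rng(0)
n = np.arange(256)
coeffs = [1e-4, 0.05, 0.3]                        # quadratic, linear, constant
x = np.exp(1j * np.polyval(coeffs, n))
x += 0.01 * (rng.normal(size=n.size) + 1j * rng.normal(size=n.size))
print(estimate_phase_poly(x, 2))                  # ~ [1e-4, 0.05, 0.3]
```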
Citations: 0