
arXiv - CS - Machine Learning: Latest Publications

Early Detection of Coronary Heart Disease Using Hybrid Quantum Machine Learning Approach
Pub Date: 2024-09-17 · arXiv:2409.10932
Mehroush Banday, Sherin Zafar, Parul Agarwal, M Afshar Alam, Abubeker K M
Coronary heart disease (CHD) is a severe cardiac disease, and hence its early diagnosis is essential, as it improves treatment results and saves money on medical care. The rapid development of quantum computing and machine learning (ML) technologies may bring practical improvements to the performance of CHD diagnosis. Quantum machine learning (QML) is receiving tremendous interest across disciplines due to its performance and capabilities. A quantum leap in the healthcare industry would increase processing power and optimise multiple models. QML techniques have the potential to forecast cardiac disease and help in early detection. To predict the risk of coronary heart disease, a hybrid approach utilizing an ensemble machine learning model based on QML classifiers is presented in this paper. Our approach, with its unique ability to address multidimensional healthcare data, reinforces the method's robustness by fusing quantum and classical ML algorithms in a multi-step inferential framework. The marked rise in heart disease and death rates impacts worldwide human health and the global economy. Reducing cardiac morbidity and mortality requires early detection of heart disease. In this research, a hybrid approach uses techniques with quantum computing capabilities to tackle complex problems that are not amenable to conventional machine learning algorithms and to minimize computational expense. The proposed method was developed on the Raspberry Pi 5 Graphics Processing Unit (GPU) platform and tested on a broad dataset that integrates clinical and imaging data from patients suffering from CHD and healthy controls. Compared to classical machine learning models, the accuracy, sensitivity, F1 score, and specificity of the proposed hybrid QML model for CHD are manifold higher.
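The abstract does not specify how the quantum and classical classifiers are fused. As a hedged illustration of one plausible fusion step, here is a weighted soft-voting sketch in NumPy; the model names and probability values are hypothetical, not from the paper:

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Combine per-model class-probability matrices (n_samples x n_classes)
    by weighted averaging -- one simple way an ensemble could fuse quantum
    and classical classifier outputs."""
    probs = np.stack(prob_list)                       # (n_models, n, k)
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()                                   # normalize weights
    fused = np.tensordot(w, probs, axes=1)            # (n, k) mixture
    return fused.argmax(axis=1), fused

# Toy fusion: a stand-in "quantum kernel" classifier and two classical ones.
p_qml = np.array([[0.9, 0.1], [0.4, 0.6]])
p_rf  = np.array([[0.8, 0.2], [0.7, 0.3]])
p_svm = np.array([[0.7, 0.3], [0.6, 0.4]])
labels, fused = soft_vote([p_qml, p_rf, p_svm])
```

Note that on the second sample the averaged vote overrides the "quantum" member's class-1 preference; the paper's multi-step inferential framework presumably combines outputs in a more elaborate way.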
Citations: 0
HMF: A Hybrid Multi-Factor Framework for Dynamic Intraoperative Hypotension Prediction
Pub Date: 2024-09-17 · arXiv:2409.11064
Mingyue Cheng, Jintao Zhang, Zhiding Liu, Chunli Liu, Yanhu Xie
Intraoperative hypotension (IOH) prediction using Mean Arterial Pressure (MAP) is a critical research area with significant implications for patient outcomes during surgery. However, existing approaches predominantly employ static modeling paradigms that overlook the dynamic nature of physiological signals. In this paper, we introduce a novel Hybrid Multi-Factor (HMF) framework that reformulates IOH prediction as a blood pressure forecasting task. Our framework leverages a Transformer encoder, specifically designed to capture the temporal evolution of MAP series through a patch-based input representation, which segments the input physiological series into informative patches for accurate analysis. To address the challenges of distribution shift in physiological series, our approach incorporates two key innovations: (1) symmetric normalization and de-normalization processes, which help mitigate distributional drift in statistical properties and thereby ensure the model's robustness across varying conditions; and (2) sequence decomposition, which disaggregates the input series into trend and seasonal components, allowing more precise modeling of inherent sequence dependencies. Extensive experiments on two real-world datasets demonstrate the superior performance of our approach over competitive baselines, particularly in capturing the nuanced variations in input series that are crucial for accurate IOH prediction.
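The paper's implementation details are not given in the abstract, but its three named ingredients (symmetric normalization, trend/seasonal decomposition, and patching) can be sketched in NumPy; the window and patch sizes below are arbitrary illustration choices:

```python
import numpy as np

def normalize(x):
    """Symmetric (instance-wise) normalization: strip per-series statistics
    before forecasting; de-normalization restores them afterwards."""
    mu, sigma = x.mean(), x.std() + 1e-8
    return (x - mu) / sigma, (mu, sigma)

def denormalize(z, stats):
    mu, sigma = stats
    return z * sigma + mu

def decompose(x, window=5):
    """Moving-average trend plus residual 'seasonal' component."""
    pad = np.pad(x, (window // 2, window - 1 - window // 2), mode="edge")
    trend = np.convolve(pad, np.ones(window) / window, mode="valid")
    return trend, x - trend

def patchify(x, patch_len=8):
    """Segment a 1-D series into non-overlapping patches (encoder tokens)."""
    n = len(x) // patch_len * patch_len
    return x[:n].reshape(-1, patch_len)

map_series = 80 + 10 * np.sin(np.linspace(0, 6, 64))   # synthetic MAP trace
z, stats = normalize(map_series)
trend, seasonal = decompose(z)
patches = patchify(z)
recon = denormalize(z, stats)
```

A forecaster would consume `patches` (and the decomposed components), predict in the normalized space, then apply `denormalize` to the outputs, which is what makes the normalization "symmetric".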
Citations: 0
Machine Learning on Dynamic Functional Connectivity: Promise, Pitfalls, and Interpretations
Pub Date: 2024-09-17 · arXiv:2409.11377
Jiaqi Ding, Tingting Dan, Ziquan Wei, Hyuna Cho, Paul J. Laurienti, Won Hwa Kim, Guorong Wu
An unprecedented amount of existing functional Magnetic Resonance Imaging (fMRI) data provides a new opportunity to understand the relationship between functional fluctuation and human cognition/behavior using a data-driven approach. To that end, tremendous efforts have been made in machine learning to predict cognitive states from evolving volumetric images of blood-oxygen-level-dependent (BOLD) signals. Due to the complex nature of brain function, however, evaluations of learning performance and discoveries are often inconsistent across the current state of the art (SOTA). By capitalizing on large-scale existing neuroimaging data (34,887 data samples from six public databases), we seek to establish a well-founded empirical guideline for designing deep models for functional neuroimages by linking the underlying methodology with knowledge from the neuroscience domain. Specifically, we put the spotlight on: (1) What is the current SOTA performance in cognitive task recognition and disease diagnosis using fMRI? (2) What are the limitations of current deep models? (3) What is the general guideline for selecting a suitable machine learning backbone for new neuroimaging applications? We have conducted a comprehensive evaluation and statistical analysis, in various settings, to answer these outstanding questions.
Citations: 0
On the effects of similarity metrics in decentralized deep learning under distributional shift
Pub Date: 2024-09-16 · arXiv:2409.10720
Edvin Listo Zec, Tom Hagander, Eric Ihre-Thomason, Sarunas Girdzijauskas
Decentralized Learning (DL) enables privacy-preserving collaboration among organizations or users to enhance the performance of local deep learning models. However, model aggregation becomes challenging when client data is heterogeneous, and identifying compatible collaborators without direct data exchange remains a pressing issue. In this paper, we investigate the effectiveness of various similarity metrics in DL for identifying peers for model merging, conducting an empirical analysis across multiple datasets with distribution shifts. Our research provides insights into the performance of these metrics, examining their role in facilitating effective collaboration. By exploring the strengths and limitations of these metrics, we contribute to the development of robust DL methods.
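As an illustration of the kind of metric under study (the paper compares several; this sketch assumes cosine similarity over flattened model weights and uses toy clients):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two flattened weight vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def top_peers(weights, client, k=1):
    """Rank the other clients by weight similarity -- one way a DL node
    could pick merge partners without exchanging raw data."""
    scores = {j: cosine_sim(weights[client], w)
              for j, w in enumerate(weights) if j != client}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Three toy clients: 0 and 1 trained on similar data, 2 on shifted data.
w = [np.array([1.0, 2.0, 3.0]),
     np.array([1.1, 2.1, 2.9]),
     np.array([-3.0, 0.5, -1.0])]
peers = top_peers(w, client=0, k=1)
```

Only model parameters (or gradients/logits, depending on the metric) cross the network, which is what preserves privacy relative to sharing data directly.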
Citations: 0
CSKV: Training-Efficient Channel Shrinking for KV Cache in Long-Context Scenarios
Pub Date: 2024-09-16 · arXiv:2409.10593
Luning Wang, Shiyao Li, Xuefei Ning, Zhihang Yuan, Shengen Yan, Guohao Dai, Yu Wang
Large Language Models (LLMs) have been widely adopted to process long-context tasks. However, the large memory overhead of the key-value (KV) cache poses significant challenges in long-context scenarios. Existing training-free KV cache compression methods typically focus on quantization and token pruning, which have compression limits, and excessive sparsity can lead to severe performance degradation. Other methods design new architectures with less KV overhead but require significant training overhead. To address these two drawbacks, we further explore the redundancy in the channel dimension and apply an architecture-level design with minor training costs. We therefore introduce CSKV, a training-efficient Channel Shrinking technique for KV cache compression: (1) We first analyze the singular value distribution of the KV cache, revealing significant redundancy and compression potential along the channel dimension. Based on this observation, we propose using low-rank decomposition for key and value layers and storing the low-dimension features. (2) To preserve model performance, we introduce a bi-branch KV cache, including a window-based full-precision KV cache and a low-precision compressed KV cache. (3) To reduce the training costs, we minimize the layer-wise reconstruction loss for the compressed KV cache instead of retraining the entire LLM. Extensive experiments show that CSKV can reduce the memory overhead of the KV cache by 80% while maintaining the model's long-context capability. Moreover, we show that our method can be seamlessly combined with quantization to further reduce the memory overhead, achieving a compression ratio of up to 95%.
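A minimal sketch of the channel-shrinking idea in step (1), assuming truncated SVD as the low-rank decomposition (the paper learns its projections with a reconstruction loss; this is only a conceptual illustration on a synthetic cache):

```python
import numpy as np

def shrink_channels(kv, rank):
    """Low-rank channel shrinking via truncated SVD: store (n x r) per-token
    features plus a shared (r x d) up-projection instead of the full
    (n x d) cache."""
    U, S, Vt = np.linalg.svd(kv, full_matrices=False)
    low_dim = U[:, :rank] * S[:rank]      # compressed per-token features
    up_proj = Vt[:rank]                   # reconstruction matrix
    return low_dim, up_proj

rng = np.random.default_rng(0)
# Synthetic KV cache with strong channel redundancy: rank-4 structure
# hidden in 64 channels across 128 cached tokens.
kv = rng.normal(size=(128, 4)) @ rng.normal(size=(4, 64))
low_dim, up_proj = shrink_channels(kv, rank=4)
recon = low_dim @ up_proj
stored = low_dim.size + up_proj.size      # 768 floats vs kv.size = 8192
```

Because the stored features grow as `n * r` rather than `n * d`, the savings compound with context length; combining this with low-precision storage is what pushes the paper's reported ratio further.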
Citations: 0
LASERS: LAtent Space Encoding for Representations with Sparsity for Generative Modeling
Pub Date: 2024-09-16 · arXiv:2409.11184
Xin Li, Anand Sarwate
Learning compact and meaningful latent space representations has been shown to be very useful in generative modeling tasks for visual data. One particular example is applying Vector Quantization (VQ) in variational autoencoders (VQ-VAEs, VQ-GANs, etc.), which has demonstrated state-of-the-art performance in many modern generative modeling applications. Quantizing the latent space has been justified by the assumption that the data themselves are inherently discrete in the latent space (like pixel values). In this paper, we propose an alternative representation of the latent space that relaxes the structural assumption of the VQ formulation. Specifically, we assume that the latent space can be approximated by a union-of-subspaces model corresponding to a dictionary-based representation under a sparsity constraint. The dictionary is learned/updated during the training process. We apply this approach to two models: Dictionary Learning Variational Autoencoders (DL-VAEs) and DL-VAEs with Generative Adversarial Networks (DL-GANs). We show empirically that our latent space is more expressive and leads to better representations than the VQ approach in terms of reconstruction quality, at the expense of a small computational overhead for the latent space computation. Our results thus suggest that the true benefit of the VQ approach might not be the discretization of the latent space, but rather the lossy compression of the latent space. We confirm this hypothesis by showing that our sparse representations also address the codebook collapse issue commonly found in VQ-family models.
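A toy sketch of a dictionary-based sparse code (greedy matching pursuit over a hypothetical orthonormal dictionary; the paper learns the dictionary jointly with the autoencoder, which this does not attempt):

```python
import numpy as np

def matching_pursuit(x, D, n_nonzero=2):
    """Greedy sparse coding: approximate x with at most n_nonzero atoms
    (columns of D, assumed unit-norm)."""
    code = np.zeros(D.shape[1])
    residual = x.copy()
    for _ in range(n_nonzero):
        k = np.argmax(np.abs(D.T @ residual))   # best-matching atom
        coef = D[:, k] @ residual
        code[k] += coef
        residual -= coef * D[:, k]              # remove explained part
    return code, residual

# Toy dictionary: identity atoms in R^4, so the sparse code is exact.
D = np.eye(4)
x = np.array([0.0, 3.0, 0.0, -1.5])
code, residual = matching_pursuit(x, D, n_nonzero=2)
```

The latent vector is then represented by a handful of (atom index, coefficient) pairs rather than a single nearest codebook entry, which is the relaxation of VQ's structural assumption described above.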
Citations: 0
Toward Mitigating Sex Bias in Pilot Trainees' Stress and Fatigue Modeling
Pub Date: 2024-09-16 · arXiv:2409.10676
Rachel Pfeifer, Sudip Vhaduri, Mark Wilson, Julius Keller
While researchers have been trying to understand the stress and fatigue among pilots, especially pilot trainees, and to develop stress/fatigue models to automate the process of detecting stress/fatigue, they often do not consider biases such as sex in those models. However, in a critical profession like aviation, where the demographic distribution is disproportionately skewed to one sex, it is urgent to mitigate biases for fair and safe model predictions. In this work, we investigate the perceived stress/fatigue of 69 college students, including 40 pilot trainees, around 63% of whom are male. We construct models with decision trees, first without bias mitigation and then with bias mitigation using a threshold optimizer with demographic parity and equalized odds constraints, repeated 30 times with random instances. Using bias mitigation, we achieve improvements of 88.31% (demographic parity difference) and 54.26% (equalized odds difference), which are also found to be statistically significant.
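The two fairness criteria named above can be computed directly. A minimal sketch with hypothetical labels and a binary sex attribute follows (the paper uses a threshold optimizer, such as Fairlearn's, to enforce these constraints; this only shows the metrics being improved):

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rate between the two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equalized_odds_difference(y_true, y_pred, group):
    """Largest between-group gap in TPR or FPR."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for label in (0, 1):              # label 0 -> FPR gap, label 1 -> TPR gap
        rs = [y_pred[(group == g) & (y_true == label)].mean() for g in (0, 1)]
        gaps.append(abs(rs[0] - rs[1]))
    return max(gaps)

# Hypothetical predictions for six subjects; group 0/1 is the sex attribute.
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 1, 1, 1]
dpd = demographic_parity_difference(y_pred, group)
eod = equalized_odds_difference(y_true, y_pred, group)
```

A threshold optimizer lowers these gaps by choosing group-specific decision thresholds on the classifier's scores, which is why it can be applied post hoc to an already-trained decision tree.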
Citations: 0
Motion Forecasting via Model-Based Risk Minimization
Pub Date: 2024-09-16 · arXiv:2409.10585
Aron Distelzweig, Eitan Kosman, Andreas Look, Faris Janjoš, Denesh K. Manivannan, Abhinav Valada
Forecasting the future trajectories of surrounding agents is crucial for autonomous vehicles to ensure safe, efficient, and comfortable route planning. While model ensembling has improved prediction accuracy in various fields, its application to trajectory prediction is limited due to the multi-modal nature of the predictions. In this paper, we propose a novel sampling method applicable to trajectory prediction, based on the predictions of multiple models. We first show that conventional sampling based on predicted probabilities can degrade performance due to missing alignment between models. To address this problem, we introduce a new method that generates optimal trajectories from a set of neural networks, framing it as a risk minimization problem with a variable loss function. By using state-of-the-art models as base learners, our approach constructs diverse and effective ensembles for optimal trajectory sampling. Extensive experiments on the nuScenes prediction dataset demonstrate that our method surpasses current state-of-the-art techniques, achieving top ranks on the leaderboard. We also provide a comprehensive empirical study on ensembling strategies, offering insights into their effectiveness. Our findings highlight the potential of advanced ensembling techniques in trajectory prediction, significantly improving predictive performance and paving the way for more reliable predicted trajectories.
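A toy sketch of risk-minimizing trajectory selection under an ensemble mixture (greedy subset selection with a probability-weighted min-ADE risk; the loss function and candidate modes below are illustrative, not the paper's):

```python
import numpy as np

def ade(a, b):
    """Average displacement error between two (T, 2) trajectories."""
    return np.linalg.norm(a - b, axis=-1).mean()

def expected_risk(selected, modes, probs):
    """Risk of a selected trajectory set: probability-weighted distance
    from each predicted mode to its nearest selected trajectory."""
    return sum(p * min(ade(m, s) for s in selected)
               for m, p in zip(modes, probs))

def greedy_select(modes, probs, k=2):
    """Greedily pick k trajectories from the candidate pool that minimize
    the expected risk under the ensemble's mode mixture."""
    chosen, pool = [], list(range(len(modes)))
    for _ in range(k):
        best = min(pool, key=lambda i: expected_risk(
            [modes[j] for j in chosen] + [modes[i]], modes, probs))
        chosen.append(best)
        pool.remove(best)
    return chosen

# Three candidate modes: two near-duplicates going straight, one turning.
t = np.linspace(0, 1, 5)[:, None]
modes = np.stack([t * [1.0, 0.0], t * [1.0, 0.1], t * [0.0, 1.0]])
probs = np.array([0.45, 0.45, 0.10])
chosen = greedy_select(modes, probs, k=2)
```

With a naive probability-based sample the two near-duplicate straight modes would both be kept; minimizing the risk instead keeps one straight mode and the low-probability turn, covering both behaviors.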
Citations: 0
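The risk-minimization selection described in this abstract can be illustrated with a toy version: pool candidate trajectories from all ensemble members into one weighted mixture, score each candidate by its expected displacement error under that mixture, and keep the lowest-risk candidates. This is a minimal sketch assuming the loss is average point-wise Euclidean distance; the paper's variable loss function and neural base learners are not reproduced here, and the function names are illustrative.

```python
def expected_risk(candidate, mixture):
    # Expected displacement error of one candidate trajectory under the
    # pooled predictive mixture: sum over all ensemble predictions of
    # probability * average point-wise distance to the candidate.
    risk = 0.0
    for prob, traj in mixture:
        dist = sum(
            ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            for (x1, y1), (x2, y2) in zip(candidate, traj)
        ) / len(traj)
        risk += prob * dist
    return risk


def select_trajectories(mixture, k):
    # Rank every pooled candidate by its expected risk against the whole
    # mixture and return the k lowest-risk trajectories.
    ranked = sorted(mixture, key=lambda pt: expected_risk(pt[1], mixture))
    return [traj for _, traj in ranked[:k]]
```

Selecting by expected risk, rather than by each model's own predicted probability, is what compensates for the missing alignment between ensemble members that the abstract points out.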
Offline Reinforcement Learning for Learning to Dispatch for Job Shop Scheduling
Pub Date : 2024-09-16 DOI: arxiv-2409.10589
Jesse van Remmerden, Zaharah Bukhsh, Yingqian Zhang
The Job Shop Scheduling Problem (JSSP) is a complex combinatorial optimization problem. There has been growing interest in using online Reinforcement Learning (RL) for JSSP. While online RL can quickly find acceptable solutions, especially for larger problems, it produces lower-quality results than traditional methods like Constraint Programming (CP). A significant downside of online RL is that it cannot learn from existing data, such as solutions generated from CP; such methods must instead train from scratch, leading to sample inefficiency and making them unable to learn from more optimal examples. We introduce Offline Reinforcement Learning for Learning to Dispatch (Offline-LD), a novel approach for JSSP that addresses these limitations. Offline-LD adapts two CQL-based Q-learning methods (mQRDQN and discrete mSAC) for maskable action spaces, introduces a new entropy bonus modification for discrete SAC, and exploits reward normalization through preprocessing. Our experiments show that Offline-LD outperforms online RL on both generated and benchmark instances. By introducing noise into the dataset, we achieve similar or better results than those obtained from the expert dataset, indicating that a more diverse training set is preferable because it contains counterfactual information.
Citations: 0
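Adapting CQL to the maskable action spaces mentioned in this abstract hinges on computing the conservative penalty only over valid dispatch actions, so masked-out actions receive no push-down. A minimal sketch of that penalty for a single state, assuming tabular Q-values; `masked_cql_penalty` is a hypothetical helper name, not from the paper's code.

```python
import math


def masked_cql_penalty(q_values, valid_mask, taken_action):
    # CQL regularizer for one state: logsumexp of Q over *valid* actions
    # minus the Q-value of the action actually taken in the dataset.
    # Restricting the logsumexp to valid actions keeps infeasible dispatch
    # decisions out of the conservative term entirely.
    valid_q = [q for q, ok in zip(q_values, valid_mask) if ok]
    m = max(valid_q)  # max-shift for numerical stability
    logsumexp = m + math.log(sum(math.exp(q - m) for q in valid_q))
    return logsumexp - q_values[taken_action]
```

Adding this penalty to the usual TD loss discourages the learned Q-function from overestimating feasible actions that the offline (e.g. CP-generated) data never took.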
Federated Learning for Smart Grid: A Survey on Applications and Potential Vulnerabilities
Pub Date : 2024-09-16 DOI: arxiv-2409.10764
Zikai Zhang, Suman Rath, Jiaohao Xu, Tingsong Xiao
The Smart Grid (SG) is a critical energy infrastructure that collects real-time electricity usage data to forecast future energy demands using information and communication technologies (ICT). Due to growing concerns about data security and privacy in SGs, federated learning (FL) has emerged as a promising training framework. FL offers a balance between privacy, efficiency, and accuracy in SGs by enabling collaborative model training without sharing private data from IoT devices. In this survey, we thoroughly review recent advancements in designing FL-based SG systems across three stages: generation, transmission and distribution, and consumption. Additionally, we explore potential vulnerabilities that may arise when implementing FL in these stages. Finally, we discuss the gap between state-of-the-art FL research and its practical applications in SGs and propose future research directions. These focus on potential attack and defense strategies for FL-based SG systems and the need to build a robust FL-based SG infrastructure. Unlike traditional surveys that address security issues in centralized machine learning methods for SG systems, this survey specifically examines the applications and security concerns in FL-based SG systems for the first time. Our aim is to inspire further research into applications and improvements in the robustness of FL-based SG systems.
Citations: 0
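The collaborative training this survey builds on is typically FedAvg-style aggregation: the server averages client parameter updates weighted by local dataset size, so raw meter data never leaves the device. A minimal sketch, assuming each client model is flattened to a plain list of parameters; `fedavg` is an illustrative helper, not an API from any specific FL framework.

```python
def fedavg(client_weights, client_sizes):
    # Weighted average of client model parameters (FedAvg): each client's
    # update is weighted by its local dataset size, so the aggregated
    # server model reflects the overall data distribution without the
    # server ever seeing raw electricity-usage data.
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

The vulnerabilities the survey catalogs largely live in this step: a malicious client can poison the update it submits, which is why robust aggregation variants replace the plain weighted mean.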