
Latest publications from the Journal of King Saud University-Computer and Information Sciences

On-chain zero-knowledge machine learning: An overview and comparison
IF 5.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-05 DOI: 10.1016/j.jksuci.2024.102207
Vid Keršič, Sašo Karakatič, Muhamed Turkanović
Zero-knowledge proofs introduce a mechanism to prove that certain computations were performed without revealing any underlying information, and are commonly used in blockchain-based decentralized apps (dapps). This cryptographic technique addresses trust issues prevalent in blockchain applications and has now been adapted for machine learning (ML) services, a combination known as Zero-Knowledge Machine Learning (ZKML). By leveraging the distributed nature of blockchains, this approach enhances the trustworthiness of ML deployments and opens up new possibilities for privacy-preserving and robust ML applications within dapps. This paper provides a comprehensive overview of the ZKML process and its critical components for verifying ML services on-chain. Furthermore, this paper explores how blockchain technology and smart contracts can offer verifiable, trustless proof that a specific ML model has been used correctly to perform inference, all without relying on a single trusted entity. Additionally, the paper compares and reviews existing frameworks for implementing ZKML in dapps, serving as a reference point for researchers interested in this emerging field.
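The commit, prove, and verify flow the abstract describes can be sketched end to end. This is a minimal illustration under heavy assumptions: the hash commitment below merely stands in for a real zk-SNARK/zk-STARK proof, the toy linear model stands in for an ML model, and all function names are hypothetical. Unlike this sketch, a real ZK verifier would check the proof without ever seeing the weights.

```python
import hashlib
import json

def commit_model(weights):
    """Prover publishes a commitment to the model weights (e.g. on-chain)."""
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def prove_inference(weights, x):
    """Prover runs inference off-chain and emits (output, 'proof').
    The hash here is a stand-in, not a zero-knowledge proof."""
    y = sum(w * xi for w, xi in zip(weights, x))  # toy linear model
    proof = hashlib.sha256(f"{weights}{x}{y}".encode()).hexdigest()
    return y, proof

def verify_inference(commitment, weights, x, y, proof):
    """Verifier (a smart contract in practice) checks the claimed output.
    A genuine ZK verifier would not need the weights at all."""
    if commit_model(weights) != commitment:
        return False
    return proof == hashlib.sha256(f"{weights}{x}{y}".encode()).hexdigest()

weights = [0.5, -1.0, 2.0]
commitment = commit_model(weights)
y, proof = prove_inference(weights, [1.0, 2.0, 3.0])
assert verify_inference(commitment, weights, [1.0, 2.0, 3.0], y, proof)
```

The design point this illustrates is the separation of roles: heavy inference happens off-chain, while the on-chain verifier only checks a small proof against a published commitment.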
Citations: 0
IPSRM: An intent perceived sequential recommendation model
IF 5.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-05 DOI: 10.1016/j.jksuci.2024.102206
Chaoran Wang , Mingyang Wang , Xianjie Wang , Yingchun Tan

Objectives:

Sequential recommendation aims to recommend items relevant to users’ interests based on their existing interaction sequences. Current models fall short in capturing users’ latent intentions and do not sufficiently consider sequence information when modeling users and items. Additionally, noise in user interaction sequences can affect the model’s optimization process.

Methods:

This paper introduces an intent perceived sequential recommendation model (IPSRM). IPSRM employs the generalized expectation–maximization (EM) framework, alternating between learning sequence representations and optimizing the model to better capture the underlying intentions of user interactions. Specifically, IPSRM maps unlabeled behavioral sequences into a frequency-domain filtering space and a random Gaussian distribution space. These mappings reduce the impact of noise and improve the learning of user behavior representations. Through a clustering process, IPSRM captures users’ potential interaction intentions and incorporates them as a supervision signal in the contrastive self-supervised learning process to guide optimization.

Results:

Experimental results on four standard datasets demonstrate the superiority of IPSRM. Comparative experiments also verify that IPSRM exhibits strong robustness under cold start and noisy interaction conditions.

Conclusions:

Capturing latent user intentions, integrating intention-based supervision into model optimization, and mitigating noise in sequential modeling significantly enhance the performance of sequential recommendation systems.
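The frequency-domain filtering step in the Methods can be illustrated with a toy low-pass filter over a sequence of embeddings. The filter design, cutoff, and data below are assumptions for illustration only, not the paper's exact mapping.

```python
import numpy as np

def frequency_filter(seq_emb, keep_ratio=0.5):
    """Low-pass filter a (seq_len, dim) embedding sequence along time:
    transform to the frequency domain, zero the high-frequency bins,
    and transform back."""
    spec = np.fft.rfft(seq_emb, axis=0)
    cutoff = max(1, int(spec.shape[0] * keep_ratio))
    spec[cutoff:] = 0  # drop high-frequency (presumed noisy) components
    return np.fft.irfft(spec, n=seq_emb.shape[0], axis=0)

rng = np.random.default_rng(0)
# a smooth "true" behavior pattern plus Gaussian noise, as toy data
clean = np.sin(2 * np.pi * np.arange(32) / 32)[:, None] * np.ones((1, 4))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = frequency_filter(noisy, keep_ratio=0.25)
# filtering brings the sequence closer to the clean signal
assert np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean)
```

This captures why such a mapping reduces noise: interaction noise tends to spread across frequencies, while the dominant behavior pattern concentrates in a few low-frequency components that the filter preserves.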
Citations: 0
Real-time segmentation and classification of whole-slide images for tumor biomarker scoring
IF 5.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-05 DOI: 10.1016/j.jksuci.2024.102204
Md Jahid Hasan , Wan Siti Halimatul Munirah Wan Ahmad , Mohammad Faizal Ahmad Fauzi , Jenny Tung Hiong Lee , See Yee Khor , Lai Meng Looi , Fazly Salleh Abas , Afzan Adam , Elaine Wan Ling Chan
Histopathology image segmentation and classification are essential for diagnosing and treating breast cancer. This study introduces a highly accurate segmentation and classification approach for histopathology images using a single architecture. We utilized the well-known segmentation architectures SegNet and U-Net, and modified the decoder to attach ResNet, VGG and DenseNet for the classification tasks. These hybrid models are integrated with Stardist as the backbone and implemented in a real-time pathologist workflow with a graphical user interface. The models were trained and tested offline using the ER-IHC-stained private and H&E-stained public (MoNuSeg) datasets. For real-time evaluation, the proposed model was evaluated on PR-IHC-stained glass slides. It achieved the highest segmentation pixel-based F1-scores of 0.902 and 0.903 on the private and public datasets respectively, and a classification-based F1-score of 0.833 on the private dataset. The experiments show the robustness of our method, in which a model trained on the ER-IHC dataset is able to perform well on real-time microscopy of PR-IHC slides at both 20x and 40x magnification. This will help pathologists with a quick decision-making process.
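The pixel-based F1 metric used to report segmentation quality above can be sketched directly. The masks and helper name below are illustrative, not the paper's evaluation code.

```python
import numpy as np

def pixel_f1(pred, target):
    """Pixel-based F1 over binary masks: 2TP / (2TP + FP + FN)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

# toy 4x4 masks: a predicted nucleus region vs. the ground-truth region
pred = np.zeros((4, 4), int); pred[1:3, 1:3] = 1
target = np.zeros((4, 4), int); target[1:4, 1:4] = 1
print(round(pixel_f1(pred, target), 3))  # → 0.615
```

Here TP = 4 overlapping pixels, FP = 0, FN = 5, so F1 = 8 / 13 ≈ 0.615; the paper's reported 0.902/0.903 scores are this quantity computed over whole-slide masks.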
Citations: 0
Dual-stream dynamic graph structure network for document-level relation extraction
IF 5.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-03 DOI: 10.1016/j.jksuci.2024.102202
Yu Zhong, Bo Shen
Extracting structured information from unstructured text is crucial for knowledge management and utilization, and is the goal of document-level relation extraction. Existing graph-based methods face issues with information confusion and integration, limiting the reasoning capabilities of the model. To tackle this problem, a dual-stream dynamic graph structure network is proposed to model documents from various perspectives. Leveraging the richness of document information, a static document heterogeneous graph is constructed. A dynamic heterogeneous document graph is then induced on this foundation to facilitate global information aggregation for entity representation learning. Additionally, the static document graph is decomposed into multi-level static semantic graphs, and multi-layer dynamic semantic graphs are further induced, explicitly segregating information from different levels. Information from the two streams is effectively integrated via an information integrator. To mitigate the interference of noise during the reasoning process, a noise regularization mechanism is also designed. Experimental results on three extensively used, publicly accessible datasets for document-level relation extraction demonstrate that our model achieves F1 scores of 62.56%, 71.1%, and 86.9% on the DocRED, CDR, and GDA datasets, respectively, significantly outperforming the baselines. Further analysis also demonstrates the effectiveness of the model in multi-entity scenarios.
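The global information aggregation over a document graph builds on a basic message-passing step, sketched here as one round of mean-aggregation over neighbors. The graph, features, and function name are toy illustrations, not the paper's network.

```python
import numpy as np

def aggregate(adj, feats):
    """One message-passing step on a graph: each node's new representation
    is the mean of its neighbors' feature vectors."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1  # avoid division by zero for isolated nodes
    return adj @ feats / deg

# toy 3-node document graph: mention0 -- mention1 -- entity2
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], float)
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
out = aggregate(adj, feats)  # node 1 receives the mean of nodes 0 and 2
```

A dynamic graph network repeats such steps while also learning which edges to keep, so that entity nodes accumulate information from their mentions across the whole document.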
Citations: 0
ParaU-Net: An improved UNet parallel coding network for lung nodule segmentation
IF 5.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-01 DOI: 10.1016/j.jksuci.2024.102203
Yingqi Lu , Xiangsuo Fan , Jinfeng Wang , Shaojun Chen , Jie Meng
Accurate segmentation of lung nodules is crucial for the early detection of lung cancer and other pulmonary diseases. Traditional segmentation methods face several challenges, such as the overlap between nodules and surrounding anatomical structures like blood vessels and bronchi, as well as the variability in nodule size and shape, which complicates the segmentation algorithms. Existing methods often inadequately address these issues, highlighting the need for a more effective solution. To address these challenges, this paper proposes an improved multi-scale parallel fusion encoding network, ParaU-Net. ParaU-Net enhances the segmentation accuracy and model performance by optimizing the encoding process, improving feature extraction, preserving down-sampling information, and expanding the receptive field. Specifically, the multi-scale parallel fusion mechanism introduced in ParaU-Net better captures the fine features of nodules and reduces interference from other structures. Experiments conducted on the LIDC (The Lung Image Database Consortium) public dataset demonstrate the excellent performance of ParaU-Net in segmentation tasks, with results showing an IoU of 87.15%, Dice of 92.16%, F1-score of 92.24%, F2-score of 92.33%, and F0.5-score of 92.69%. These results significantly outperform other advanced segmentation methods, validating the effectiveness and accuracy of the proposed model in lung nodule CT image analysis. The code is available at https://github.com/XiaoBai-Lyq/ParaU-Net.
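The multi-scale parallel fusion idea can be sketched in one dimension: run the same input through filters of several sizes in parallel and fuse the branches, so fine and coarse features coexist. The fixed averaging kernels and fusion-by-mean below are illustrative stand-ins for ParaU-Net's learned blocks.

```python
import numpy as np

def multi_scale_fuse(signal, scales=(3, 5, 7)):
    """Smooth the same 1-D signal at several kernel sizes in parallel,
    then fuse the equally sized branches by averaging."""
    branches = []
    for k in scales:
        kern = np.ones(k) / k  # box filter of width k
        branches.append(np.convolve(signal, kern, mode="same"))
    return np.mean(branches, axis=0)

x = np.r_[np.zeros(8), np.ones(8)]  # a sharp 1-D "edge", as toy input
fused = multi_scale_fuse(x)
assert fused.shape == x.shape
```

Small kernels keep the edge sharp while large ones bring in context; fusing them is the 1-D analogue of combining fine nodule boundaries with their surroundings.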
Citations: 0
LungNeXt: A novel lightweight network utilizing enhanced mel-spectrogram for lung sound classification
IF 5.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-01 DOI: 10.1016/j.jksuci.2024.102200
Fan Wang , Xiaochen Yuan , Yue Liu , Chan-Tong Lam
Lung auscultation is essential for early lung condition detection. Categorizing adventitious lung sounds requires expert discrimination by medical specialists. This paper details the features of LungNeXt, a novel classification model specifically designed for lung sound analysis. Furthermore, we propose two auxiliary methods: RandClipMix (RCM) for data augmentation and Enhanced Mel-Spectrogram for Feature Extraction (EMFE). RCM addresses the issue of data imbalance by randomly mixing clips within the same category to create new adventitious lung sounds. EMFE augments specific frequency bands in spectrograms to highlight adventitious features. These contributions enable LungNeXt to achieve outstanding performance. LungNeXt optimally integrates an appropriate number of NeXtblocks, ensuring superior performance and a lightweight model architecture. The proposed RCM and EMFE methods, along with the LungNeXt classification network, have been evaluated on the SPRSound dataset. Experimental results revealed a commendable score of 0.5699 for the lung sound five-category task on SPRSound. Specifically, the LungNeXt model is characterized by its efficiency, with only 3.804M parameters and a computational complexity of 0.659G FLOPS. This lightweight and efficient model is particularly well-suited for applications in electronic stethoscope back-end processing equipment, providing efficient diagnostic advice to physicians and patients.
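The band-enhancement idea behind EMFE can be sketched as amplifying the spectrogram rows covering a frequency range where adventitious sounds concentrate. The gain, band edges, and function name here are illustrative assumptions, not the paper's values.

```python
import numpy as np

def enhance_bands(spec, low_bin, high_bin, gain=2.0):
    """Scale the [low_bin, high_bin) frequency band of a
    (freq_bins, time_frames) spectrogram by a fixed gain."""
    out = spec.copy()
    out[low_bin:high_bin, :] *= gain
    return out

spec = np.ones((8, 5))  # toy (freq_bins, frames) mel-like spectrogram
enhanced = enhance_bands(spec, 2, 5, gain=3.0)
assert np.allclose(enhanced[2:5], 3.0) and np.allclose(enhanced[:2], 1.0)
```

Boosting a target band before feeding the spectrogram to the classifier makes the adventitious components stand out relative to breathing and background noise.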
Citations: 0
High-throughput systolic array-based accelerator for hybrid transformer-CNN networks
IF 5.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-01 DOI: 10.1016/j.jksuci.2024.102194
Qingzeng Song , Yao Dai , Hao Lu , Guanghao Jin
In this era of Transformers enjoying remarkable success, Convolutional Neural Networks (CNNs) remain highly relevant and useful. Indeed, hybrid Transformer-CNN network architectures, which combine the benefits of both approaches, have achieved impressive results. Vision Transformer (ViT) is a significant neural network architecture, built primarily on the transformer framework, that features a convolutional layer as its first layer. However, owing to the distinct computation patterns inherent in attention and convolution, existing hardware accelerators for these two models are typically designed separately and lack a unified approach to accelerating both efficiently. In this paper, we present a dedicated accelerator on a field-programmable gate array (FPGA) platform. The accelerator, which integrates a configurable three-dimensional systolic array, is specifically designed to accelerate the inference of hybrid Transformer-CNN networks. Convolution and Transformer computations can both be mapped to the systolic array by unifying these operations as matrix multiplication. Softmax and LayerNorm, which are frequently used in hybrid Transformer-CNN networks, were also implemented on the FPGA board. The accelerator achieved high performance with a peak throughput of 722 GOP/s at an average energy efficiency of 53 GOPS/W. Its computation latencies were 51.3 ms, 18.1 ms, and 6.8 ms for ViT-Base, ViT-Small, and ViT-Tiny, respectively. The accelerator provided a 12× improvement in energy efficiency compared to the CPU, a 2.3× improvement compared to the GPU, and a 1.5× to 2× improvement compared to existing accelerators regarding speed and energy efficiency.
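The unification of convolution and attention as matrix multiplication, which lets a single systolic array serve both, can be sketched with the standard im2col lowering. Shapes and names here are toy illustrations, not the accelerator's actual dataflow.

```python
import numpy as np

def im2col_conv2d(x, kernel):
    """Valid 2-D convolution (ML convention, no kernel flip) lowered to
    one matrix multiplication via im2col."""
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    # gather each sliding window into a row: shape (oh*ow, kh*kw)
    cols = np.array([x[i:i + kh, j:j + kw].ravel()
                     for i in range(oh) for j in range(ow)])
    # the whole convolution is now a single matmul, the same primitive
    # a systolic array executes for attention's QK^T and PV products
    return (cols @ kernel.ravel()).reshape(oh, ow)

x = np.arange(16.0).reshape(4, 4)
k = np.ones((2, 2))
out = im2col_conv2d(x, k)
# cross-check against a direct sliding-window computation
direct = np.array([[x[i:i + 2, j:j + 2].sum() for j in range(3)]
                   for i in range(3)])
assert np.allclose(out, direct)
```

Once both operator families reduce to matmul, one array geometry can be time-shared between the ViT attention layers and the convolutional stem.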
Citations: 0
A scalable attention network for lightweight image super-resolution
IF 5.2 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-01 DOI: 10.1016/j.jksuci.2024.102185
Jinsheng Fang, Xinyu Chen, Jianglong Zhao, Kun Zeng
Modeling long-range dependencies among features has become a consensus approach to improving single image super-resolution (SISR), which has stimulated interest in enlarging the kernel sizes of convolutional neural networks (CNNs). Although larger kernels do improve network performance, they also sharply increase parameter counts and computational complexity. Hence, the kernel sizes must be chosen carefully to keep the network efficient. In this work, we study how the placement of larger kernels influences network performance and propose a scalable attention network (SCAN). In SCAN, we propose a depth-related attention block (DRAB) that consists of several multi-scale information enhancement blocks (MIEBs) and resizable-kernel attention blocks (RKABs). The RKAB dynamically adjusts its kernel size according to the location of its DRAB in the network. This resizable mechanism lets the network extract more informative features with larger kernels in shallower layers and focus on useful information with smaller kernels in deeper layers, which effectively improves the SR results. Extensive experiments demonstrate that the proposed SCAN outperforms other state-of-the-art lightweight SR methods. Our codes are available at https://github.com/ginsengf/SCAN.
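The depth-dependent kernel sizing can be illustrated with a toy schedule: shallower blocks get larger kernels, deeper blocks smaller ones. The concrete sizes and the mapping below are hypothetical, not taken from the paper:

```python
def kernel_size_for_depth(depth, n_blocks, sizes=(9, 7, 5, 3)):
    """Pick an attention kernel size for the block at a given depth.

    Hypothetical schedule: the first quarter of the blocks uses the
    largest kernel, the last quarter the smallest, mirroring the
    resizable-kernel idea (large receptive fields early, focus late).
    """
    idx = min(depth * len(sizes) // n_blocks, len(sizes) - 1)
    return sizes[idx]

# For an 8-block network, kernel sizes taper from 9 down to 3:
schedule = [kernel_size_for_depth(d, 8) for d in range(8)]
assert schedule == [9, 9, 7, 7, 5, 5, 3, 3]
```

A schedule like this keeps the expensive large-kernel convolutions confined to the few shallow layers where, per the abstract, they help most.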
{"title":"A scalable attention network for lightweight image super-resolution","authors":"Jinsheng Fang ,&nbsp;Xinyu Chen ,&nbsp;Jianglong Zhao ,&nbsp;Kun Zeng","doi":"10.1016/j.jksuci.2024.102185","DOIUrl":"10.1016/j.jksuci.2024.102185","url":null,"abstract":"<div><div>Modeling long-range dependencies among features has become a consensus to improve the results of single image super-resolution (SISR), which stimulates interest in enlarging the kernel sizes in convolutional neural networks (CNNs). Although larger kernels definitely improve the network performance, network parameters and computational complexities are raised sharply as well. Hence, an optimization of setting the kernel sizes is required to improve the efficiency of the network. In this work, we study the influence of the positions of larger kernels on the network performance, and propose a scalable attention network (SCAN). In SCAN, we propose a depth-related attention block (DRAB) that consists of several multi-scale information enhancement blocks (MIEBs) and resizable-kernel attention blocks (RKABs). The RKAB dynamically adjusts the kernel size concerning the locations of the DRABs in the network. The resizable mechanism allows the network to extract more informative features in shallower layers with larger kernels and focus on useful information in deeper layers with smaller ones, which effectively improves the SR results. Extensive experiments demonstrate that the proposed SCAN outperforms other state-of-the-art lightweight SR methods. 
Our codes are available at https://github.com/ginsengf/SCAN.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 8","pages":"Article 102185"},"PeriodicalIF":5.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142358141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing requirements-to-code traceability with GA-XWCoDe: Integrating XGBoost, Node2Vec, and genetic algorithms for improving model performance and stability
IF 5.2 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-10-01 DOI: 10.1016/j.jksuci.2024.102197
Zhiyuan Zou, Bangchao Wang, Xinrong Hu, Yang Deng, Hongyan Wan, Huan Jin
This study addresses the challenge of requirements-to-code traceability by proposing a novel model, Genetic Algorithm-XGBoost With Code Dependency (GA-XWCoDe), which integrates eXtreme Gradient Boosting (XGBoost) with a Node2Vec model-weighted code dependency strategy and genetic algorithms for parameter optimisation. XGBoost mitigates overfitting and enhances model stability, while Node2Vec improves prediction accuracy for low-confidence links. Genetic algorithms are employed to optimise model parameters efficiently, reducing the resource intensity of traditional methods. Experimental results show that GA-XWCoDe outperforms the state-of-the-art method TRAceability lInk cLassifier (TRAIL) by 17.44% and Deep Forest for Requirement traceability (DF4RT) by 33.36% in terms of average F1 performance across four datasets. It is significantly superior to all baseline methods at a confidence level of α < 0.01 and demonstrates exceptional performance and stability across various training data scales.
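A genetic-algorithm hyperparameter search of the kind described can be sketched as follows. The `fitness` function here is a stand-in toy surface, not a cross-validated XGBoost F1 score, and the parameter names and ranges are illustrative assumptions:

```python
import random

def fitness(params):
    # Stand-in for the cross-validated F1 of a trace-link classifier;
    # a toy surface peaked at max_depth=6, learning_rate=0.1.
    return (-((params["max_depth"] - 6) ** 2)
            - 100 * (params["learning_rate"] - 0.1) ** 2)

def random_params(rng):
    return {"max_depth": rng.randint(2, 12),
            "learning_rate": rng.uniform(0.01, 0.5)}

def evolve(generations=30, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [random_params(rng) for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the fitter half, breed the rest
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            # Crossover: pick depth from one parent, average the rates
            child = {"max_depth": rng.choice([a["max_depth"], b["max_depth"]]),
                     "learning_rate": (a["learning_rate"] + b["learning_rate"]) / 2}
            if rng.random() < 0.2:  # mutation, clamped to the valid range
                child["max_depth"] = max(2, min(12, child["max_depth"] + rng.choice([-1, 1])))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
assert 2 <= best["max_depth"] <= 12
```

In the real setting each fitness evaluation would train and validate a model, which is why the abstract stresses efficiency: the GA's population and generation counts bound how many expensive trainings are run.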
{"title":"Enhancing requirements-to-code traceability with GA-XWCoDe: Integrating XGBoost, Node2Vec, and genetic algorithms for improving model performance and stability","authors":"Zhiyuan Zou ,&nbsp;Bangchao Wang ,&nbsp;Xinrong Hu ,&nbsp;Yang Deng ,&nbsp;Hongyan Wan ,&nbsp;Huan Jin","doi":"10.1016/j.jksuci.2024.102197","DOIUrl":"10.1016/j.jksuci.2024.102197","url":null,"abstract":"<div><div>This study addresses the challenge of requirements-to-code traceability by proposing a novel model, Genetic Algorithm-XGBoost With Code Dependency (GA-XWCoDe), which integrates eXtreme Gradient Boosting (XGBoost) with a Node2Vec model-weighted code dependency strategy and genetic algorithms for parameter optimisation. XGBoost mitigates overfitting and enhances model stability, while Node2Vec improves prediction accuracy for low-confidence links. Genetic algorithms are employed to optimise model parameters efficiently, reducing the resource intensity of traditional methods. Experimental results show that GA-XWCoDe outperforms the state-of-the-art method TRAceability lInk cLassifier (TRAIL) by 17.44% and Deep Forest for Requirement traceability (DF4RT) by 33.36% in terms of average F1 performance across four datasets. 
It is significantly superior to all baseline methods at a confidence level of α &lt; 0.01 and demonstrates exceptional performance and stability across various training data scales.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 8","pages":"Article 102197"},"PeriodicalIF":5.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142358137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fast and robust JND-guided video watermarking scheme in spatial domain
IF 5.2 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-30 DOI: 10.1016/j.jksuci.2024.102199
Antonio Cedillo-Hernandez, Lydia Velazquez-Garcia, Manuel Cedillo-Hernandez, David Conchouso-Gonzalez
Watermarking schemes that operate in the spatial domain tend to be fast but offer limited robustness and imperceptibility, whereas schemes in transform domains are robust but computationally expensive. One of the main challenges of watermarking digital video is the sheer computational power required to process the large volume of data. In this paper we propose a watermarking algorithm for digital video that addresses this problem. To increase speed, the watermark is embedded with a technique that modifies the DCT coefficients directly in the spatial domain, and the process treats the video scene, rather than the video frame, as the basic unit. In terms of robustness, the watermark is modulated by a Just Noticeable Distortion (JND) scheme computed directly in the spatial domain and guided by visual attention, raising the watermark strength to the maximum level that remains imperceptible to human eyes. Experimental results confirm that the proposed method achieves remarkable performance in terms of processing time, robustness and imperceptibility compared to previous studies.
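The core spatial-domain trick — shifting a chosen DCT coefficient without computing a full transform, by adding a scaled DCT basis image to the pixels — can be sketched as follows. This illustrates the general principle only; the paper's JND modulation and scene-level processing are omitted:

```python
import numpy as np

def dct_basis(u, v, n=8):
    """Orthonormal 2-D DCT-II basis image for frequency (u, v) on an n x n block."""
    cu = np.sqrt(1 / n) if u == 0 else np.sqrt(2 / n)
    cv = np.sqrt(1 / n) if v == 0 else np.sqrt(2 / n)
    i = np.arange(n)
    col = np.cos((2 * i + 1) * u * np.pi / (2 * n))
    row = np.cos((2 * i + 1) * v * np.pi / (2 * n))
    return cu * cv * np.outer(col, row)

def coeff(block, u, v):
    # A single DCT coefficient is the inner product with its basis image
    return float(np.sum(block * dct_basis(u, v, block.shape[0])))

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))

# Spatial-domain embedding: because the basis images are orthonormal,
# adding delta * basis to the pixels shifts exactly that coefficient by
# delta — no forward/inverse DCT of the whole block is needed.
u, v, delta = 2, 3, 5.0
before = coeff(block, u, v)
watermarked = block + delta * dct_basis(u, v)
after = coeff(watermarked, u, v)
assert np.isclose(after - before, delta)
```

In a JND-guided scheme, `delta` would be chosen per block from a perceptual model so the change stays below the visibility threshold.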
{"title":"Fast and robust JND-guided video watermarking scheme in spatial domain","authors":"Antonio Cedillo-Hernandez ,&nbsp;Lydia Velazquez-Garcia ,&nbsp;Manuel Cedillo-Hernandez ,&nbsp;David Conchouso-Gonzalez","doi":"10.1016/j.jksuci.2024.102199","DOIUrl":"10.1016/j.jksuci.2024.102199","url":null,"abstract":"<div><div>Generally speaking, those watermarking studies using the spatial domain tend to be fast but with limited robustness and imperceptibility while those performed in other transform domains are robust but have high computational cost. Watermarking applied to digital video has as one of the main challenges the large amount of computational power required due to the huge amount of information to be processed. In this paper we propose a watermarking algorithm for digital video that addresses this problem. To increase the speed, the watermark is embedded using a technique to modify the DCT coefficients directly in the spatial domain, in addition to carrying out this process considering the video scene as the basic unit and not the video frame. In terms of robustness, the watermark is modulated by a Just Noticeable Distortion (JND) scheme computed directly in the spatial domain guided by visual attention to increase the strength of the watermark to the maximum level but without this operation being perceivable by human eyes. 
Experimental results confirm that the proposed method achieves remarkable performance in terms of processing time, robustness and imperceptibility compared to previous studies.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 9","pages":"Article 102199"},"PeriodicalIF":5.2,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142424439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0