Journal of King Saud University-Computer and Information Sciences: Latest Publications

Heterogeneous network link prediction based on network schema and cross-neighborhood attention
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-06 | DOI: 10.1016/j.jksuci.2024.102154

Heterogeneous network link prediction is a hot topic in network analysis. It aims to predict missing links by exploiting the rich semantic information present in heterogeneous networks, thereby enhancing the effectiveness of related data mining tasks. Existing heterogeneous network link prediction methods use meta-paths or meta-graphs to extract semantic information, relying heavily on prior knowledge. This paper proposes a heterogeneous network link prediction method based on network schema and cross-neighborhood attention (HNLP-NSCA). Heterogeneous node features are projected into a shared latent vector space using fully connected layers. To remove the dependence on meta-path prior knowledge, semantic information is extracted from the network schema structures that are unique to heterogeneous networks. Node features are extracted from the relevant network schema instances, avoiding the meta-path selection problem. The neighborhood interaction information of input node pairs is captured via cross-neighborhood attention, strengthening the nonlinear mapping capability of the link predictor. The resulting cross-neighborhood interaction vectors are combined with the node feature vectors and fed into a multilayer perceptron for link prediction. Experimental results on four real-world datasets demonstrate that the proposed HNLP-NSCA method outperforms the baseline models.
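To make the cross-neighborhood attention idea concrete, here is a minimal NumPy sketch, not the authors' implementation: for a candidate node pair, each node's projected feature attends over the other node's neighborhood, and the resulting interaction vectors are concatenated with the node features and scored by a small MLP. All dimensions, weights, and the scoring head are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # shared latent dimension after the fully connected projection

def attend(query, neighbor_feats):
    """Scaled dot-product attention of one node's feature over the other
    node's neighborhood (the cross-neighborhood step)."""
    scores = neighbor_feats @ query / np.sqrt(d)        # (num_neighbors,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ neighbor_feats                     # (d,)

def link_score(h_u, h_v, neigh_u, neigh_v, W1, b1, w2, b2):
    """Concatenate node features with cross-neighborhood interaction vectors
    and score the pair with a tiny MLP (sigmoid output = link probability)."""
    cross_u = attend(h_u, neigh_v)     # u attends over v's neighborhood
    cross_v = attend(h_v, neigh_u)     # v attends over u's neighborhood
    z = np.concatenate([h_u, h_v, cross_u, cross_v])
    hidden = np.maximum(0.0, W1 @ z + b1)               # ReLU layer
    return 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))

# Toy example: latent features for one node pair and their neighborhoods.
h_u, h_v = rng.normal(size=d), rng.normal(size=d)
neigh_u, neigh_v = rng.normal(size=(5, d)), rng.normal(size=(7, d))
W1, b1 = 0.1 * rng.normal(size=(32, 4 * d)), np.zeros(32)
w2, b2 = 0.1 * rng.normal(size=32), 0.0
print(link_score(h_u, h_v, neigh_u, neigh_v, W1, b1, w2, b2))
```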

Citations: 0
Spatial relaxation transformer for image super-resolution
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-06 | DOI: 10.1016/j.jksuci.2024.102150

Transformer-based approaches have demonstrated remarkable performance in image processing tasks due to their ability to model long-range dependencies. Current mainstream Transformer-based methods typically confine self-attention computation within windows to reduce the computational burden. However, this constraint may lead to grid artifacts in the reconstructed images due to insufficient cross-window information exchange, particularly in image super-resolution tasks. To address this issue, we propose the Multi-Scale Texture Complementation Block based on Spatial Relaxation Transformer (MSRT), which leverages features at multiple scales and augments information exchange through cross-window attention computation. In addition, we introduce a loss function based on a texture-smoothness prior, which exploits the continuity of textures between patches to constrain the reconstructed images toward more coherent texture information. Specifically, we employ learnable compressive sensing to extract shallow features from images, preserving image content while reducing feature dimensionality and improving computational efficiency. Extensive experiments on multiple benchmark datasets demonstrate that our method outperforms previous state-of-the-art approaches in both qualitative and quantitative evaluations.
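As a small illustration of the texture-continuity idea mentioned above, the following sketch (an assumption-laden stand-in, not the MSRT loss itself) measures the intensity jumps across the internal boundaries of non-overlapping patches; penalizing such a quantity during training discourages visible seams (grid artifacts) between windows. The patch size and the use of plain absolute differences are illustrative choices.

```python
import numpy as np

def patch_boundary_smoothness(img, patch=8):
    """Mean absolute intensity jump across the internal boundaries of an image
    processed in non-overlapping `patch` x `patch` windows; a penalty of this
    form discourages visible seams (grid artifacts) between windows."""
    h, w = img.shape
    vertical_seams = img[:, patch:w:patch] - img[:, patch - 1:w - 1:patch]
    horizontal_seams = img[patch:h:patch, :] - img[patch - 1:h - 1:patch, :]
    return np.abs(vertical_seams).mean() + np.abs(horizontal_seams).mean()

rng = np.random.default_rng(1)
sr_output = rng.random((64, 64))   # stand-in for a reconstructed image
print(patch_boundary_smoothness(sr_output, patch=8))
```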

Citations: 0
RSA-RRT: A path planning algorithm based on restricted sampling area
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-06 | DOI: 10.1016/j.jksuci.2024.102152

The rapidly-exploring random tree (RRT) algorithm is widely applied in the field of path planning. However, the conventional RRT algorithm suffers from low planning efficiency and long path lengths, making it ill-suited to complex environments. To address these problems, this paper proposes an improved RRT algorithm based on a restricted sampling area (RSA-RRT). First, to address the low efficiency, a restricted sampling area strategy is proposed: by dynamically restricting the sampling area, the number of invalid sampling points is reduced, improving planning efficiency. Then, for path planning in narrow areas, a fixed-angle sampling strategy is proposed, which improves planning efficiency in narrow passages by sampling with a larger step size at a fixed angle. Finally, a multi-triangle optimization strategy is proposed to address the problem of long and tortuous paths. The effectiveness of the RSA-RRT algorithm is verified through strategy-level performance verification and ablation experiments. Comparisons with other algorithms in different environments show that RSA-RRT obtains shorter paths in less time, effectively balancing path quality and planning speed, and can be applied in complex real-world environments.
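The following is a minimal sketch of a restricted-sampling step in the spirit of the strategy described above, under the assumption of a 2D workspace: samples are drawn from a bounding box around the start and goal, inflated by a margin that is enlarged when the planner makes no progress. The margin schedule and box shape are illustrative, not the authors' exact dynamic restriction rule.

```python
import random

def restricted_sample(start, goal, margin, bounds):
    """Draw one random sample from a box around start/goal, inflated by
    `margin` and clipped to the workspace `bounds` ((xmin, xmax), (ymin, ymax))."""
    (xmin, xmax), (ymin, ymax) = bounds
    lo_x = max(min(start[0], goal[0]) - margin, xmin)
    hi_x = min(max(start[0], goal[0]) + margin, xmax)
    lo_y = max(min(start[1], goal[1]) - margin, ymin)
    hi_y = min(max(start[1], goal[1]) + margin, ymax)
    return (random.uniform(lo_x, hi_x), random.uniform(lo_y, hi_y))

random.seed(0)
start, goal = (5.0, 5.0), (90.0, 40.0)
margin = 5.0
for attempt in range(3):
    print(restricted_sample(start, goal, margin, ((0, 100), (0, 100))))
    margin *= 1.5   # illustrative schedule: enlarge the area if the tree stalls
```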

Citations: 0
Adaptive feature selection and optimized multiple histogram construction for reversible data hiding
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-03 | DOI: 10.1016/j.jksuci.2024.102149

Reversible data hiding (RDH) algorithms have been extensively employed in the fields of copyright protection and information dissemination. Among RDH algorithms, the multiple histogram modification (MHM) algorithm has attracted significant attention because of its capability to generate high-quality marked images. In previous MHM methods, the prediction-error histograms were mostly generated in a fixed way. Recently, clustering algorithms have been used to automatically classify prediction errors into multiple classes, which enhances the similarity among prediction errors within the same class. However, the design of features and the choice of the number of clusters are crucial in clustering algorithms. Traditional algorithms use the same features and a fixed number of clusters (e.g., empirically generating 16 classes), which may limit performance due to the lack of adaptivity. To address these limitations, this paper proposes an adaptive initial feature selection scheme and a cluster-number optimization scheme based on the Fuzzy C-Means (FCM) clustering algorithm. Experimental results verify the superiority of the proposed scheme over other state-of-the-art schemes.
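A compact sketch of the clustering step underlying such methods is shown below: a plain Fuzzy C-Means routine groups per-pixel context features, and one prediction-error histogram is built per class. The context features, error range, and hard assignment by maximum membership are illustrative assumptions rather than the paper's adaptive feature selection or cluster-number optimization.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Plain Fuzzy C-Means: returns cluster centers and the membership
    matrix U (n x c) for the feature rows in X."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / dist ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy usage: cluster per-pixel context features, then build one
# prediction-error histogram per class (hard-assigned by maximum membership).
rng = np.random.default_rng(1)
features = rng.normal(size=(500, 3))       # e.g. local-complexity descriptors
errors = rng.integers(-5, 6, size=500)     # prediction errors in [-5, 5]
centers, U = fuzzy_c_means(features, c=4)
labels = U.argmax(axis=1)
histograms = [np.bincount(errors[labels == k] + 5, minlength=11) for k in range(4)]
print([h.sum() for h in histograms])
```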

Citations: 0
IBPF-RRT*: An improved path planning algorithm with Ultra-low number of iterations and stabilized optimal path quality
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-02 | DOI: 10.1016/j.jksuci.2024.102146

Due to its asymptotic optimality, the Rapidly-exploring Random Tree star (RRT*) algorithm is widely used for robotic operations in complex environments. However, RRT* suffers from poor path quality, slow convergence, and unstable generation of high-quality paths during path planning. This paper proposes an Improved Bi-Tree Obstacle Edge Search Artificial Potential Field RRT* algorithm (IBPF-RRT*) to address these issues. First, building on RRT*, a new obstacle edge search artificial potential field strategy (ESAPF) is proposed, which speeds up the path search while improving path quality. Second, a bi-directional pruning strategy is designed to optimize the branch nodes of the bi-directional search tree and, combined with the bi-directional search strategy, significantly reduce the number of iterations needed for convergence. Third, a novel path optimization strategy is proposed that stably generates high-quality paths by creating a new node between two path nodes and then optimizing the path with a pruning strategy based on the triangle inequality. Experimental results in three different scenarios show that IBPF-RRT* outperforms the RRT*, Q-RRT*, PQ-RRT*, F-RRT*, and CCPF-RRT* algorithms in terms of optimal path quality, algorithm stability, and number of iterations, confirming the effectiveness of the three proposed strategies.
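As a small illustration of pruning based on the triangle inequality, the sketch below greedily shortcuts a path: from each kept node it jumps to the farthest later node that can be reached by a collision-free straight segment, so any skipped detour is replaced by a shorter direct edge. The circular-obstacle collision check and the greedy scheme are illustrative assumptions, not the authors' multi-triangle optimization.

```python
import math

def collision_free(p, q, obstacles):
    """Illustrative check: the straight segment p-q avoids circular obstacles
    given as (center_x, center_y, radius) tuples."""
    (px, py), (qx, qy) = p, q
    dx, dy = qx - px, qy - py
    seg_len_sq = dx * dx + dy * dy
    for ox, oy, r in obstacles:
        t = 0.0 if seg_len_sq == 0 else max(
            0.0, min(1.0, ((ox - px) * dx + (oy - py) * dy) / seg_len_sq))
        cx, cy = px + t * dx, py + t * dy      # closest point on the segment
        if math.hypot(ox - cx, oy - cy) <= r:
            return False
    return True

def prune_path(path, obstacles):
    """Greedy shortcutting via the triangle inequality: from each kept node,
    jump to the farthest later node reachable by a collision-free segment."""
    pruned, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not collision_free(path[i], path[j], obstacles):
            j -= 1
        pruned.append(path[j])
        i = j
    return pruned

path = [(0, 0), (1, 2), (2, 4), (4, 4), (6, 4), (8, 5)]
obstacles = [(3.0, 1.0, 1.0)]
print(prune_path(path, obstacles))
```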

Citations: 0
A trust enhancement model based on distributed learning and blockchain in service ecosystems
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-30 | DOI: 10.1016/j.jksuci.2024.102147

In a service ecosystem, users' trust in services is the foundation for maintaining normal interactions among users, service providers, and platforms. However, malicious attacks can tamper with the trust values of these services, making it difficult for users to identify reliable services and undermining the interests of reliable service providers and platforms. Existing trust management models that address the impact of malicious attacks on service reliability rarely consider leveraging the different attack targets to improve the accuracy of compromised service trust. We therefore propose a trust enhancement model based on distributed learning and blockchain for service ecosystems, which adaptively enhances the trust values of compromised services according to the targets of anomalous attacks. First, we conduct a comprehensive analysis of the targets of malicious attacks using distributed learning. Second, we introduce a trust enhancement contract that applies different methods to enhance service trust depending on the attack target. Finally, our approach significantly outperforms the baseline method: for the different attack targets, we observe reductions in RMSE of 12.38% and 12.12% and improvements in coverage of 24.94% and 14.56%, respectively. The experimental results demonstrate the reliability and efficacy of the proposed model.
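To illustrate the general flavor of adaptively restoring compromised trust values (not the paper's contract logic or its attack-target analysis), the sketch below flags a reported trust value as tampered when it deviates strongly from the consensus of distributed peer estimates and replaces it with that consensus; the robust z-score threshold and the toy data are assumptions.

```python
import numpy as np

def enhance_trust(reported, peer_estimates, z_thresh=2.5):
    """Flag a reported trust value as compromised when it deviates from the
    peer consensus by more than `z_thresh` robust z-scores, and replace it
    with that consensus."""
    consensus = np.median(peer_estimates, axis=0)
    spread = np.median(np.abs(peer_estimates - consensus), axis=0) + 1e-9
    z = np.abs(reported - consensus) / spread
    return np.where(z > z_thresh, consensus, reported), z > z_thresh

rng = np.random.default_rng(0)
true_trust = rng.uniform(0.5, 1.0, size=8)
peer_estimates = true_trust + rng.normal(0, 0.02, size=(5, 8))  # distributed learners
reported = true_trust.copy()
reported[2] = 0.05                       # one trust value tampered by an attack
enhanced, flagged = enhance_trust(reported, peer_estimates)
rmse = np.sqrt(np.mean((enhanced - true_trust) ** 2))
print(flagged, round(float(rmse), 4))
```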

Citations: 0
Lumbar intervertebral disc detection and classification with novel deep learning models
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-30 | DOI: 10.1016/j.jksuci.2024.102148

Low back pain (LBP) is a prevalent spinal issue, affecting eight out of ten individuals. Notably, lumbar intervertebral disc (IVD) abnormalities frequently contribute to LBP. For diagnosing LBP, Magnetic Resonance Imaging (MRI) is crucial for obtaining detailed spinal images. This paper employs deep learning (DL) to detect and locate lumbar IVDs in sagittal MR images, and further classifies lumbar IVDs as healthy or herniated using both a novel convolutional neural network (CNN) and conventional CNN models. The dataset comprises MR images from 32 patients: 10 with only healthy discs and the remaining 22 with a mix of healthy and herniated discs, totaling 160 lumbar discs (112 healthy and 48 herniated). In this study, the ResNet-50 architecture in the Novel Lumbar IVD Detection (NLID) model served as the feature extractor to segment the five lumbar IVDs from MR images, and the features extracted from ResNet-50 were fed into YOLOv2 to identify the region of interest (ROI). The findings indicate that optimal performance was achieved at the 22nd Rectified Linear Unit (ReLU) activation layer, with 99.59% average precision, a 97.22% F1-score, 94.59% precision, and 100% recall; performance remained above the 85% threshold up to the 22nd ReLU activation layer. For imbalanced dataset classification, AlexNet was the strongest of the pre-trained networks, with the highest test accuracy of 90.63% and an F1-score of 88.77%, while the Novel Lumbar IVD Classification (NLIC) model achieved superior results of 93.75% test accuracy and a 92.27% F1-score. On the balanced dataset, NLIC achieved 96.88% test accuracy and a 96.46% F1-score with fewer epochs than AlexNet, affirming the robustness of the novel trained-from-scratch network. These findings clearly underscore the effectiveness of CNNs in both medical image segmentation and classification.
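For reference, the reported precision, recall, F1-score, and accuracy follow the standard definitions from binary confusion-matrix counts, as in the short sketch below; the counts used here are illustrative and are not the paper's results.

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1-score, and accuracy from binary confusion-matrix
    counts, taking the herniated disc as the positive class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Illustrative counts only, not the paper's confusion matrix.
print(classification_metrics(tp=45, fp=3, fn=2, tn=110))
```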

Citations: 0
Planning the development of text-to-speech synthesis models and datasets with dynamic deep learning
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-26 | DOI: 10.1016/j.jksuci.2024.102131

Text-to-speech (TTS) synthesis is the process of converting natural-language text into speech. Speech synthesizers face a major challenge in recognizing the prosodic elements of written text, such as intonation (the rise and fall of the voice in speaking) and duration, whereas continuous speech features are influenced by the personality and emotions of the speaker. A database is maintained to store the synthesized speech units, and the output depends on how closely the stored utterances match the target words and how naturally they can be combined. In the past few years, the field of text-to-speech synthesis has been heavily influenced by the emergence of deep learning, an AI technology that has gained widespread popularity. This review paper presents a taxonomy of deep learning-based models and architectures, discusses the various datasets used in the TTS process, and covers the evaluation metrics that are commonly used. The paper concludes with future directions and highlights several deep learning models that give promising results in this field.

Citations: 0
SWFormer: A scale-wise hybrid CNN-Transformer network for multi-classes weed segmentation
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-26 | DOI: 10.1016/j.jksuci.2024.102144

Weeds in rapeseed fields are an important cause of crop yield reduction and economic loss. Precision agriculture is therefore an important task for sustainable agriculture and weed management. At present, deep learning techniques have shown great potential for image-based detection and classification of various crops and weeds. However, the inherent limitations of traditional convolutional neural networks pose significant challenges due to the local similarity of weeds and crops in color, shape, and texture. To address this issue, we introduce SWFormer, a scale-wise hybrid CNN-Transformer network that leverages the distinct strengths of both convolutional and transformer architectures: convolutional structures excel at extracting short-range dependencies among pixels, whereas transformer structures are adept at capturing global dependencies. Additionally, we propose two innovative modules. First, the Scale-wise Cascade Convolution (SWCC) module is designed to capture multiscale features and expand the receptive field. Second, the Adaptive Semantic Aggregation (ASA) module enables adaptive and effective information fusion across two distinct feature maps. Our experiments were conducted on the publicly available cropandweed and SB20 datasets, where SWFormer yields improved performance over other mainstream segmentation models. Specifically, with 52.33M parameters/527.51 GFLOPs, SWFormer achieves an mAP of 76.54% and an accuracy of 83.95% on the cropandweed dataset, and an mAP of 61.24% and an accuracy of 79.47% on the SB20 dataset. Overall, the evaluation clearly demonstrates that the proposed SWFormer is conducive to further research in precision agriculture.
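The sketch below gives one plausible PyTorch reading of the two modules named above, as an assumption-labeled illustration rather than the authors' architecture: a cascade of dilated convolutions that widens the receptive field stage by stage (standing in for SWCC), and a learned per-pixel gate that fuses two feature maps (standing in for ASA).

```python
import torch
import torch.nn as nn

class ScaleWiseCascadeConv(nn.Module):
    """Illustrative cascade of dilated 3x3 convolutions: each stage widens the
    receptive field and feeds the next, and all stage outputs are summed."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations]
        )

    def forward(self, x):
        out, feat = 0, x
        for conv in self.stages:
            feat = torch.relu(conv(feat))
            out = out + feat
        return out

class AdaptiveSemanticAggregation(nn.Module):
    """Illustrative fusion of two feature maps via a learned per-pixel gate."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, a, b):
        g = torch.sigmoid(self.gate(torch.cat([a, b], dim=1)))
        return g * a + (1 - g) * b

x = torch.randn(1, 32, 64, 64)
fused = AdaptiveSemanticAggregation(32)(ScaleWiseCascadeConv(32)(x), x)
print(fused.shape)   # torch.Size([1, 32, 64, 64])
```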

Citations: 0
Diverse representation-guided graph learning for multi-view metric clustering
IF 5.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-24 | DOI: 10.1016/j.jksuci.2024.102129

Multi-view graph clustering has garnered tremendous interest for its capability to effectively separate data by harnessing information from multiple graphs representing distinct views. Despite these advances, conventional methods commonly construct similarity graphs directly from raw features, leading to suboptimal outcomes in the presence of noise or outliers. To address this, latent representation-based graph clustering has emerged. However, it often assumes that multiple views share a fixed-dimensional coefficient matrix, potentially losing useful information and limiting representation capability. Additionally, many methods use Euclidean distance as the similarity metric, which may inaccurately measure linear relationships between samples. To tackle these challenges, we develop a novel diverse representation-guided graph learning method for multi-view metric clustering (DRGMMC). Concretely, the raw sample matrix from each view is first projected into diverse latent spaces to capture comprehensive knowledge. Subsequently, a popular metric is leveraged to adaptively learn linearity-aware similarity graphs based on the obtained coefficient matrices. Furthermore, a self-weighted fusion strategy and a Laplacian rank constraint are introduced to output clustering results directly. Consequently, our model merges diverse representation learning, metric learning, consensus graph learning, and data clustering into a joint framework whose components reinforce each other for holistic optimization. Extensive experimental results show that DRGMMC outperforms most advanced graph clustering techniques.
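A rough sketch of the per-view graph construction and self-weighted fusion steps is given below. It substitutes a truncated SVD for the learned projections and a Gaussian kernel on Euclidean distance for the paper's linearity-aware metric, so it should be read only as an illustration of the pipeline shape, not of DRGMMC itself.

```python
import numpy as np

def view_similarity_graph(X, latent_dim, gamma=1.0):
    """Project one view to a low-dimensional latent space via truncated SVD,
    then build a Gaussian-kernel similarity graph over the samples."""
    Xc = X - X.mean(axis=0)
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    Z = U[:, :latent_dim] * s[:latent_dim]      # view-specific latent representation
    sq_dist = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dist)

def fuse_graphs(graphs):
    """Self-weighted fusion: views whose graph lies closer to the current
    average receive larger weights (one illustrative reweighting pass)."""
    mean_graph = sum(graphs) / len(graphs)
    dists = np.array([np.linalg.norm(G - mean_graph) + 1e-12 for G in graphs])
    w = 1.0 / dists
    w /= w.sum()
    return sum(wi * G for wi, G in zip(w, graphs)), w

rng = np.random.default_rng(0)
views = [rng.normal(size=(50, 20)), rng.normal(size=(50, 35))]  # two views, 50 samples
graphs = [view_similarity_graph(X, latent_dim=5) for X in views]
consensus, weights = fuse_graphs(graphs)
print(consensus.shape, weights)
```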

Citations: 0