
Journal of King Saud University-Computer and Information Sciences — Latest Publications

Content-based quality evaluation of scientific papers using coarse feature and knowledge entity network
IF 5.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102119
Zhongyi Wang, Haoxuan Zhang, Haihua Chen, Yunhe Feng, Junhua Ding

Pre-evaluating scientific paper quality aids in alleviating peer review pressure and fostering scientific advancement. Although prior studies have identified numerous quality-related features, their effectiveness and representativeness of paper content remain to be comprehensively investigated. Addressing this issue, we propose a content-based interpretable method for pre-evaluating the quality of scientific papers. Firstly, we define quality attributes of computer science (CS) papers as integrity, clarity, novelty, and significance, based on peer review criteria from 11 top-tier CS conferences. We formulate the problem as two classification tasks: Accepted/Disputed/Rejected (ADR) and Accepted/Rejected (AR). Subsequently, we construct fine-grained features from metadata and knowledge entity networks, including text structure, readability, references, citations, semantic novelty, and network structure. We empirically evaluate our method using the ICLR paper dataset, achieving optimal performance with the Random Forest model, yielding F1 scores of 0.715 and 0.762 for the two tasks, respectively. Through feature analysis and case studies employing SHAP interpretable methods, we demonstrate that the proposed features enhance the performance of machine learning models in scientific paper quality evaluation, offering interpretable evidence for model decisions.
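
The setup described above — hand-crafted, content-derived features fed to a Random Forest and evaluated with F1 — can be sketched in a few lines of scikit-learn. The feature matrix and accept/reject labels below are synthetic placeholders, not the authors' ICLR data or their exact feature definitions.

```python
# Minimal sketch: Random Forest on hand-crafted paper features for an Accepted/Rejected task.
# Features and labels are synthetic stand-ins for the paper's ICLR feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))   # e.g., text structure, readability, references,
                                # citations, semantic novelty, network structure
y = (X[:, 4] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)
print("F1:", round(f1_score(y_te, clf.predict(X_te)), 3))
print("feature importances:", clf.feature_importances_.round(2))  # rough stand-in for SHAP analysis
```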

Citations: 0
Optimized reversible data hiding technique based on multidirectional prediction error histogram and fluctuation-based adaptation
IF 5.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102112
Dima Kasasbeh, Mohammed Anbar

Reversible Data Hiding Techniques (RDH) play an increasingly pivotal role in the field of cybersecurity. Overlooking the properties of the carrier image and neglecting the influence of texture can lead to undesirable distortions and irreversible data hiding. In this paper, a novel block-based RDH technique is proposed that harnesses the relative correlation between multidirectional prediction error histograms (MPEH) and pixel fluctuation values to mitigate undesirable distortions and enable RDH, thereby ensuring heightened security and efficiency in the distribution process and improving the robustness of the block-based RDH technique. The proposed technique uses a combination of pixel fluctuation and local complexity measures to determine the best embedding locations within smooth regions based on the cumulative peak regions of the MPEH with the lowest fluctuation values. Similarly, during the extraction process, the same optimal embedding locations are identified within smooth regions. The multidirectional prediction error histograms are then used to accurately extract the hidden data from the pixels with lower fluctuation values. Overall, the experimental results highlight the effectiveness and superiority of the proposed technique in various aspects of data embedding and extraction, and demonstrate that the proposed technique outperforms other state-of-the-art RDH techniques in terms of embedding capacity, image quality, and robustness against attacks. The average Peak Signal-to-Noise Ratio (PSNR) achieved with an embedding capacity ranging from 0.5×10⁴ bits to 5×10⁴ bits is 52.72 dB. Additionally, there are no errors in retrieving the carrier image and secret data.
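
The two ingredients named above — prediction errors taken in several directions and a per-pixel fluctuation (smoothness) score used to pick embedding locations — can be illustrated with NumPy. The specific predictors, window size, and threshold below are simplifying assumptions, not the paper's exact definitions.

```python
# Sketch: multidirectional prediction errors plus a fluctuation score on a grayscale image.
# Predictors (left/top neighbour) and the 25th-percentile smoothness threshold are illustrative.
import numpy as np

img = np.random.default_rng(1).integers(0, 256, size=(64, 64)).astype(np.int32)

err_h = img[:, 1:] - img[:, :-1]   # horizontal prediction error (left neighbour as predictor)
err_v = img[1:, :] - img[:-1, :]   # vertical prediction error (top neighbour as predictor)

# Per-pixel fluctuation: variance over a 3x3 neighbourhood, a simple smoothness proxy.
pad = np.pad(img, 1, mode="edge")
windows = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
fluct = windows.var(axis=(-1, -2))

# Candidate embedding positions: smooth pixels whose horizontal error lies in the histogram peak.
hist, edges = np.histogram(err_h, bins=np.arange(-255, 257))
peak = edges[np.argmax(hist)]
smooth = fluct[:, 1:] < np.percentile(fluct, 25)
candidates = np.argwhere((err_h == peak) & smooth)
print("peak error:", peak, "| candidate positions:", len(candidates))
```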

Citations: 0
Enhancing autonomous driving through intelligent navigation: A comprehensive improvement approach
IF 5.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102108
Zihao Xu, Yinghao Meng, Zhen Yin, Bowen Liu, Youzhi Zhang, Mengmeng Lin

In this paper, an intelligent navigation system is developed to achieve accurate and rapid responses in autonomous driving. The system is improved through three modules: target detection, distance measurement, and navigation obstacle avoidance. In the target detection module, the YOLOv7x-CM model is proposed to improve the efficiency and accuracy of target detection by introducing the CBAM attention mechanism and the MPDIoU loss function. In the obstacle distance measurement module, the concept of an off-center angle is introduced to optimize the traditional monocular distance measurement method. In the obstacle avoidance module, acceleration-jump and steering-speed constraints are introduced into the local path planning algorithm TEB, and the TEB-S algorithm is proposed. Finally, this paper evaluates the performance of the system modules using the KITTI and BDD100K datasets. The results show that YOLOv7x-CM improves the mAP@0.5 metric by 5.3% and 6.8% on the KITTI and BDD100K datasets, respectively, and the FPS also increases by 35.4%. For the optimized monocular detection method, the average relative distance error is reduced by a factor of 9. In addition, the proposed TEB-S algorithm has a shorter obstacle avoidance path and higher efficiency than the standard TEB algorithm.
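
For the distance-measurement idea, the standard pinhole-camera relation distance = focal length × real object height / object height in pixels is a common starting point; the cosine correction for a target lying off the optical axis in the sketch below is an assumed, generic stand-in for the paper's off-center-angle optimization rather than its actual formula.

```python
# Sketch: pinhole-model monocular distance with a simple off-centre angle correction.
# The cosine correction is an illustrative assumption, not the paper's exact method.
import math

def monocular_distance(focal_px, real_height_m, bbox_height_px,
                       bbox_center_x_px, image_center_x_px):
    depth = focal_px * real_height_m / bbox_height_px            # distance along the optical axis
    # Off-centre angle between the optical axis and the ray through the bounding-box centre.
    theta = math.atan((bbox_center_x_px - image_center_x_px) / focal_px)
    return depth / math.cos(theta)                               # slant range to the target

# Example: a 1.5 m tall pedestrian imaged 120 px tall, 200 px left of the principal point.
print(round(monocular_distance(focal_px=1000, real_height_m=1.5, bbox_height_px=120,
                               bbox_center_x_px=440, image_center_x_px=640), 2))
```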

Citations: 0
An empirical study on the state-of-the-art methods for requirement-to-code traceability link recovery
IF 5.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102118
Bangchao Wang, Zhiyuan Zou, Hongyan Wan, Yuanbang Li, Yang Deng, Xingfu Li

Requirements-to-code traceability link recovery (RC-TLR) can establish connections between requirements and target code artifacts, which is critical for the maintenance and evolution of large software systems. However, to the best of our knowledge, there is no existing experimental study focused on state-of-the-art (SOTA) methods for the RC-TLR problem, and there is also a lack of uniform benchmarks for evaluating new methods in the field. We developed a framework to identify SOTA methods using the Systematic Literature Review method and applied it to research in the RC-TLR field from 2018 to 2023. Through replication experiments on 13 datasets using 6 methods, we observed that among information retrieval-based methods, the Close Relations between Target artifacts-based method (CRT), TraceAbility Recovery by Consensual biTerms (TAROT), and Fine-grained TLR (FTLR) performed well on the COEST dataset, while Combining Part-Of-Speech with information-retrieval techniques (Conpos) and TAROT achieved promising results on large datasets. As for machine learning-based methods, Random Forest consistently exhibits strong performance on all datasets. We hope that this study can provide a comparative benchmark for performance evaluation in the RC-TLR field. The resource repository that we have established is expected to alleviate the workload of researchers in performance analysis and promote progress in the field.
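
A minimal information-retrieval baseline for the RC-TLR task ranks code artifacts against each requirement by TF-IDF cosine similarity. The snippet below is a generic vector-space-model baseline on invented toy texts; it is not a re-implementation of CRT, TAROT, FTLR, or Conpos.

```python
# Sketch: a vector-space-model baseline for requirement-to-code traceability link recovery.
# Requirement and code texts are toy examples; real pipelines also split identifiers, etc.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "The system shall encrypt user passwords before storage",
    "The system shall export reports as PDF files",
]
code_artifacts = [
    "PasswordHasher hash password bcrypt salt store encrypt",
    "PdfReportExporter render report export pdf file",
    "SessionManager create destroy user session token",
]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(requirements + code_artifacts)
sims = cosine_similarity(tfidf[:len(requirements)], tfidf[len(requirements):])

for req, row in zip(requirements, sims):
    ranked = sorted(zip(code_artifacts, row), key=lambda p: p[1], reverse=True)
    print(req, "->", [(name.split()[0], round(score, 2)) for name, score in ranked])
```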

Citations: 0
Corrigendum to “Social media sentiment analysis and opinion mining in public security: Taxonomy, trend analysis, issues and future directions” [J. King Saud Univ. – Comput. Inform. Sci. 35(9) (2023) 101776]
IF 5.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102121
{"title":"Corrigendum to “Social media sentiment analysis and opinion mining in public security: Taxonomy, trend analysis, issues and future directions” [J. King Saud Univ. – Comput. Inform. Sci. 35(9) (2023) 101776]","authors":"","doi":"10.1016/j.jksuci.2024.102121","DOIUrl":"10.1016/j.jksuci.2024.102121","url":null,"abstract":"","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":null,"pages":null},"PeriodicalIF":5.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002106/pdfft?md5=58e9bbc651755fea7d7a677d6344355b&pid=1-s2.0-S1319157824002106-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141696587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Scalability and diversity of StarGANv2-VC in Arabic emotional voice conversion: Overcoming data limitations and enhancing performance
IF 5.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102091
Ali H. Meftah, Yousef A. Alotaibi, Sid Ahmed Selouani

Emotional Voice Conversion (EVC) for under-resourced languages like Arabic faces challenges due to limited emotional speech data. This study explored strategies to mitigate dataset scarcity and improve Arabic EVC performance. Fundamental experiments (Speaker-Dependent, Gender-Dependent, Gender-Independent) were conducted using the KSUEmotions dataset to analyze speaker, gender, and model impacts. Data augmentation techniques like time stretching and phase shuffling artificially increased data diversity. Attention mechanisms integrated into StarGANv2-VC aimed to better capture emotional cues. Transfer learning leveraged the larger English Emotional Speech Database (ESD) to enhance the Arabic system. A novel “Reordering Speaker-Emotion Data” approach treated each emotion as a separate speaker to expand the emotional variability. Our comprehensive approach, combining transfer learning, data augmentation, and architectural modifications, demonstrates the potential to overcome dataset limitations and enhance the performance of Arabic EVC systems.
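
The two augmentation techniques mentioned — time stretching and phase shuffling — can be sketched directly in NumPy: stretch by resampling the waveform, and shuffle phase by keeping the magnitude spectrum while randomizing phases. These are generic signal-processing stand-ins with assumed parameters, not the study's exact augmentation pipeline.

```python
# Sketch: waveform time stretching and phase shuffling as data-augmentation primitives.
# A synthetic tone stands in for real emotional-speech recordings.
import numpy as np

sr = 16000
t = np.arange(sr) / sr
wave = 0.5 * np.sin(2 * np.pi * 220 * t)        # one second of a 220 Hz tone

def time_stretch(x, rate):
    """Naive stretch by linear resampling (rate > 1 shortens, rate < 1 lengthens)."""
    new_len = int(len(x) / rate)
    return np.interp(np.linspace(0, len(x) - 1, new_len), np.arange(len(x)), x)

def phase_shuffle(x, rng):
    """Keep the magnitude spectrum but replace all phases with random ones."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(-np.pi, np.pi, size=spec.shape)
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(0)
print(time_stretch(wave, rate=1.1).shape, phase_shuffle(wave, rng).shape)
```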

Citations: 0
Pairing-free Proxy Re-Encryption scheme with Equality Test for data security of IoT
IF 5.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102105
Gang Han, Le Li, Baodong Qin, Dong Zheng

The construction of IoT cloud platforms brings great convenience to the storage of massive IoT node data. To ensure the security of data collected by IoT nodes, the data must be encrypted when the nodes upload it. However, this raises several challenges, such as efficient retrieval over encrypted data, secure de-duplication, and massive data recovery. Proxy Re-Encryption with Equality Test (PREET) can be used to solve these problems. However, existing PREET schemes are based on bilinear pairings, which have low operational efficiency. To improve the overall operational efficiency, this paper constructs a Pairing-Free PREET scheme (PF-PREET). The security of this scheme is guaranteed by the Gap Computational Diffie–Hellman assumption in the random oracle model. It is demonstrated that the PF-PREET scheme achieves ciphertext indistinguishability against malicious users and one-wayness against malicious servers. While expanding the scope of the equality test, it retains the same security level as existing PREET schemes. We discuss the efficiency of the proposed PF-PREET scheme in terms of overall running time, average running time of each phase, and the time overhead of the tag test. Simulation experiments show an improvement over PREET schemes that use bilinear pairings.
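
"Pairing-free" means the scheme relies only on ordinary modular-exponentiation group arithmetic rather than bilinear pairings. The toy, textbook-ElGamal example below (with deliberately small, insecure parameters) only illustrates that flavor of arithmetic; it is not the PF-PREET construction and omits re-encryption and the equality test entirely.

```python
# Toy ElGamal over a prime field: the kind of pairing-free group arithmetic
# (modular exponentiation only) that PF-PREET-style schemes build on. Demo only.
import secrets

p = 0xFFFFFFFFFFFFFFC5        # a 64-bit prime; far too small for real-world security
g = 5

sk = secrets.randbelow(p - 2) + 1      # secret key x
pk = pow(g, sk, p)                     # public key g^x mod p

def encrypt(m, pk):
    r = secrets.randbelow(p - 2) + 1
    return pow(g, r, p), (m * pow(pk, r, p)) % p    # (g^r, m * pk^r)

def decrypt(c1, c2, sk):
    return (c2 * pow(c1, p - 1 - sk, p)) % p        # m = c2 / c1^x  (Fermat inverse)

msg = 123456789
c1, c2 = encrypt(msg, pk)
assert decrypt(c1, c2, sk) == msg
print("recovered:", decrypt(c1, c2, sk))
```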

Citations: 0
Rumor gatekeepers: Unsupervised ranking of Arabic twitter authorities for information verification
IF 5.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102111
Hend Aldahmash, Abdulrahman Alothaim, Abdulrahman Mirza

The advent of online social networks (OSNs) has catalyzed the formation of novel learning communities. Identifying experts within OSNs has become a critical component for facilitating knowledge exchange and enhancing self-awareness, particularly in contexts such as rumor verification processes. Research efforts aimed at locating authorities in OSNs are scant, largely due to the scarcity of annotated datasets. This work contributes to the domain of unsupervised learning by addressing the challenge of authority identification on Twitter. We employ advanced natural language processing techniques to transfer knowledge concerning topics in the Arabic language and to discern the semantic connections among candidates within Twitter in a zero-shot learning setting. We use the Single-labeled Arabic News Articles Dataset (SANAD) to extract domain features and apply them to authority finding on the Authority Finding in Arabic Twitter (AuFIN) dataset. Our evaluation assessed how well the extracted topical features transfer and how effectively authorities are retrieved, in comparison with the latest unsupervised models in this domain. Our approach successfully extracted and integrated the limited available topical semantic features of the language into the representation of candidates. The findings indicate that our hybrid model surpasses those that rely solely on lexical features of language and network topology, as well as other contemporary approaches to topic-specific expert finding.
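
The ranking step can be reduced, very roughly, to scoring each candidate account's textual profile against a topic or rumor description and sorting by similarity. TF-IDF cosine similarity below is a deliberately simplified stand-in for the zero-shot semantic representations used in the paper; the accounts and topic are invented.

```python
# Sketch: rank candidate accounts as topical authorities by profile-to-topic similarity.
# TF-IDF is a simplified stand-in for zero-shot semantic embeddings; data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topic = "outbreak of a new respiratory virus and official public health guidance"
candidates = {
    "@health_ministry": "official statements on epidemics vaccination and public health guidance",
    "@sports_daily":    "football league scores transfers and match highlights",
    "@epidemiologist":  "threads on virus transmission outbreak modelling and prevention research",
}

vec = TfidfVectorizer()
tfidf = vec.fit_transform([topic] + list(candidates.values()))
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()

for handle, score in sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True):
    print(f"{handle}: {score:.3f}")
```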

Citations: 0
An efficient steganography scheme based on wavelet transformation for side-information estimation
IF 5.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102109
Tian Wu, Xuan Hu, Chunnian Liu, Yizhe Wang, Yiping Zhu

Previous studies have specifically demonstrated that incorporating high-quality side-information can notably enhance the steganographic security of JPEG images. To obtain more precise side-information estimates, researchers utilize sophisticated deblocking or denoising filters, which enhance security but require a substantial amount of time. The challenge lies in achieving a significant improvement in security and efficiency. This paper introduces a steganographic network named WTSNet, which leverages wavelet transform for estimating side-information through iterative optimization in both the pixel and wavelet domains. The wavelet domain is purposely designed to capture intricate high-frequency texture details and facilitate precise estimation of the side-information. Furthermore, the embedding cost is adjusted based on the polarity of the estimated rounding error, ensuring both secure and efficient performance. Experimental results demonstrate that the proposed scheme substantially enhances the security of conventional JPEG steganography, which relies on additive distortion. Moreover, it outperforms the state-of-the-art JPEG steganography methods based on estimating side-information while consuming less time.
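
The role of the wavelet domain — separating smooth content from the high-frequency texture where rounding errors are hardest to estimate — can be seen with a single 2-D discrete wavelet transform using PyWavelets. The Haar wavelet and the energy measure below are assumptions chosen for illustration, not the configuration used by WTSNet.

```python
# Sketch: one-level 2-D Haar DWT splitting an image into a low-frequency approximation
# and three high-frequency detail subbands (the texture information the paper emphasizes).
import numpy as np
import pywt  # PyWavelets

img = np.random.default_rng(2).random((64, 64))

cA, (cH, cV, cD) = pywt.dwt2(img, "haar")   # approximation + horizontal/vertical/diagonal details

detail_energy = sum(np.sum(band ** 2) for band in (cH, cV, cD))
print("share of energy in high-frequency detail:", detail_energy / np.sum(img ** 2))

# The transform is invertible, so operating in the wavelet domain loses no information.
rec = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print("max reconstruction error:", np.max(np.abs(rec - img)))
```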

Citations: 0
A review of advances in integrating gene regulatory networks and metabolic networks for designing strain optimization
IF 5.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-01 | DOI: 10.1016/j.jksuci.2024.102120
Ridho Ananda, Kauthar Mohd Daud, Suhaila Zainudin

Strain optimization aims to overproduce valuable metabolites by leveraging an understanding of biological systems, including metabolic networks and gene regulatory networks (GRNs). Accordingly, researchers have proposed integrating metabolic networks and GRNs so that both can be analyzed simultaneously. The algorithms proposed between 2002 and 2021 include rFBA, SR-FBA, iFBA, PROM, PROM2.0, TIGER, BeReTa, CoRegFlux, IDREAM, TRFBA, OptRAM, TRIMER, and PRIME. Each algorithm has different characteristics, so choosing the appropriate algorithm for designing strain optimization is essential. Therefore, a critical review was conducted by synthesizing and analyzing the existing algorithms. Several aspects are discussed in this review: the strategic approaches, the GRN models, the GRN sources, the optimization, the supplementary methods, and the programming languages used. Based on the review, several algorithms, namely PROM, PROM2.0, and TRFBA, were better at modeling integrated regulatory-metabolic networks with high confidence. A simulation was applied to six strains. The results show that PROM2.0 performed best in terms of production-rate prediction and time complexity. However, the model is heavily influenced by the quality and quantity of the gene expression data. In addition, there are inconsistencies between GRNs and the gene expression data. Thus, this review also discusses future work based on GRNs and gene expression data.
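
The common core of the surveyed algorithms (rFBA, SR-FBA, PROM, TRFBA, and the rest) is a flux balance analysis problem whose reaction bounds are modulated by regulatory states. The toy model below shows that coupling on a three-reaction network with a single Boolean gene, solved as a linear program; it is a didactic sketch and not an implementation of any of the listed algorithms.

```python
# Sketch: flux balance analysis (maximize biomass flux subject to steady state S·v = 0)
# with an rFBA-style regulatory rule: if gene G is off, reaction R2 is switched off.
import numpy as np
from scipy.optimize import linprog

# Toy network:  R1: -> A (uptake)    R2: A -> B    R3: B -> biomass
S = np.array([[1, -1,  0],    # metabolite A balance
              [0,  1, -1]])   # metabolite B balance

def fba(gene_G_on: bool):
    ub_r2 = 1000.0 if gene_G_on else 0.0            # regulatory constraint on R2
    bounds = [(0, 10.0), (0, ub_r2), (0, 1000.0)]   # uptake R1 limited to 10
    c = [0, 0, -1]                                  # maximize v3 (linprog minimizes)
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    return res.x

print("gene on :", fba(True))    # ~[10, 10, 10]: full flux to biomass
print("gene off:", fba(False))   # ~[ 0,  0,  0]: pathway shut down by regulation
```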

Citations: 0