
Latest publications from the Journal of King Saud University-Computer and Information Sciences

Abnormal lower limb posture recognition based on spatial gait feature dynamic threshold detection
IF 5.2 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-22 | DOI: 10.1016/j.jksuci.2024.102161

Lower limb rehabilitation training often involves the use of assistive standing devices. However, elderly individuals frequently experience reduced exercise effectiveness or suffer muscle injuries when utilizing these devices. The ability to recognize abnormal lower limb postures can significantly enhance training efficiency and minimize the risk of injury. To address this, we propose a model based on dynamic threshold detection of spatial gait features to identify such abnormal postures. A human-assisted standing rehabilitation device platform was developed to build a lower limb gait depth dataset. RGB data is employed for keypoint detection, enabling the establishment of a 3D lower limb posture recognition model that extracts gait, temporal, and spatial features as well as keypoints. The predicted joint angles, stride length, and step frequency demonstrate errors of 4%, 8%, and 1.3%, respectively, with an average confidence of 0.95 for 3D keypoints. We employed the WOA-BP neural network to develop a dynamic threshold algorithm based on gait features and propose a model for recognizing abnormal postures. Compared to other models, our model achieves a 96% accuracy rate in recognizing abnormal postures, with a recall rate of 83% and an F1 score of 90%. ROC curve analysis and AUC values reveal that the WOA-BP algorithm performs farthest from the pure-chance line, with the highest AUC value of 0.89, indicating its superior performance over other models. Experimental results demonstrate that this model possesses a strong capability in recognizing abnormal lower limb postures, encouraging patients to correct these postures, thereby reducing muscle injuries and improving exercise effectiveness.
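The abstract does not spell out the thresholding rule itself, so the following is only a minimal sketch of dynamic-threshold checking on gait features: per-feature lower and upper bounds (produced here by a placeholder `predict_thresholds` standing in for the WOA-BP regressor) are compared against an observed joint angle, stride length, and step frequency. The feature names, window statistics, and numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

FEATURES = ["joint_angle_deg", "stride_length_m", "step_frequency_hz"]

def predict_thresholds(history: np.ndarray) -> np.ndarray:
    """Placeholder for the learned threshold model (the paper uses WOA-BP).

    Here the dynamic bounds are simply mean +/- 2 std of a recent window,
    returned as an array of shape (n_features, 2) = [lower, upper]."""
    mu, sigma = history.mean(axis=0), history.std(axis=0)
    return np.stack([mu - 2 * sigma, mu + 2 * sigma], axis=1)

def is_abnormal(sample: np.ndarray, bounds: np.ndarray) -> bool:
    """Flag the posture as abnormal if any gait feature leaves its band."""
    return bool(np.any((sample < bounds[:, 0]) | (sample > bounds[:, 1])))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "normal" gait window: angle ~65 deg, stride ~0.5 m, cadence ~1.8 Hz.
    history = rng.normal([65.0, 0.5, 1.8], [3.0, 0.05, 0.1], size=(200, 3))
    bounds = predict_thresholds(history)
    print(is_abnormal(np.array([66.0, 0.52, 1.75]), bounds))  # expected: False
    print(is_abnormal(np.array([40.0, 0.52, 1.75]), bounds))  # expected: True
```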

Citations: 0
A formal specification language and automatic modeling method of asset securitization contract
IF 5.2 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-21 | DOI: 10.1016/j.jksuci.2024.102163

Asset securitization is an important financial derivative involving complicated asset transfer operations. Therefore, digitizing traditional asset securitization contracts will improve efficiency and facilitate reliability verification. Furthermore, an accurate and verifiable requirement description is essential for collaborative development between financial professionals and software engineers. A domain-specific language for writing asset securitization contracts has previously been proposed; by simplifying the writing rules, it addresses the difficulty financial professionals face in writing smart contracts directly. However, because the existing design of the language focuses on a few simple scenarios, it is insufficient and too informal to describe more detailed scenarios. Moreover, many reliability issues remain in the generation and execution of smart contracts, such as verifying the correctness of a contract's logical properties and ensuring consistency between the contract text and the contract code. To overcome these challenges, we extend, simplify, and refine the syntax subset of the domain-specific language and name it AS-SC (Asset Securitization – Smart Contract), which financial professionals can use to describe requirements accurately. In addition, because formal methods are math-based techniques that describe system properties and can generate programs in a more rigorous and reliable manner, we propose a semantics-preserving code conversion method, named AS2EB, for converting AS-SC into Event-B, a widely used formal language. Software engineers can use the AS2EB method to verify requirements. The combination of AS-SC and AS2EB ensures the consistency and reliability of the requirements and reduces the cost of repeated communication and later testing. Taking a credit asset securitization contract as a case study, we validate the feasibility and rationality of AS-SC and AS2EB. In addition, experiments on three randomly selected real cases in different classic scenarios demonstrate the efficiency and reliability of the AS2EB method.
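As a rough illustration of what a contract-to-Event-B mapping can look like, the sketch below renders a hypothetical transfer clause (a plain Python dict, not real AS-SC syntax) as an Event-B-style event with guards and actions. The clause fields, identifiers, and output format are assumptions made for illustration only; the actual AS2EB translation rules are defined in the paper.

```python
from textwrap import indent

def clause_to_event_b(clause: dict) -> str:
    """Render a simplified transfer clause as an Event-B-style event.

    The guard checks the precondition (sufficient pool balance); the actions
    move the amount from the asset pool to the payee's account."""
    grd = f"pool_balance >= {clause['amount']}"
    acts = [
        f"pool_balance := pool_balance - {clause['amount']}",
        f"account({clause['payee']}) := account({clause['payee']}) + {clause['amount']}",
    ]
    body = (
        "WHERE\n"
        + indent(f"grd1: {grd}\n", "  ")
        + "THEN\n"
        + indent("\n".join(f"act{i+1}: {a}" for i, a in enumerate(acts)) + "\n", "  ")
        + "END"
    )
    return f"EVENT {clause['name']}\n{indent(body, '  ')}"

print(clause_to_event_b(
    {"name": "pay_senior_tranche", "amount": 1000, "payee": "investor_A"}
))
```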

Citations: 0
DAW-FA: Domain-aware adaptive weighting with fine-grain attention for unsupervised MRI harmonization
IF 5.2 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-21 | DOI: 10.1016/j.jksuci.2024.102157

Magnetic resonance (MR) imaging often lacks standardized acquisition protocols across various sites, leading to contrast variations that reduce image quality and hinder automated analysis. MR harmonization improves consistency by integrating data from multiple sources, ensuring reproducible analysis. Recent advances leverage image-to-image translation and disentangled representation learning to decompose anatomical and contrast representations, achieving consistent cross-site harmonization. However, these methods face two significant drawbacks: imbalanced contrast availability during training affects adaptation performance, and insufficient utilization of spatial variability in local anatomical structures limits model adaptability to different sites. To address these challenges, we propose Domain-aware Adaptive Weighting with Fine-Grain Attention (DAW-FA) for Unsupervised MRI Harmonization. DAW-FA incorporates an adaptive weighting mechanism and enhanced self-attention to mitigate MR contrast imbalance during training and account for spatial variability in local anatomical structures. This facilitates robust cross-site harmonization without requiring paired inter-site images. We evaluated DAW-FA on MR datasets with varying scanners and acquisition protocols. Experimental results show DAW-FA outperforms existing methods, with an average increase of 1.92 ± 0.56 in Peak Signal-to-Noise Ratio (PSNR) and 0.023 ± 0.011 in Structural Similarity Index Measure (SSIM). Additionally, we demonstrate DAW-FA’s impact on downstream tasks: Alzheimer’s disease classification and whole-brain segmentation, highlighting its potential clinical relevance.
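The exact form of DAW-FA's adaptive weighting is not given in the abstract; the snippet below is only a generic sketch of one common way to counter contrast imbalance during training, weighting each MR contrast's loss inversely to its frequency with a smoothing exponent. The contrast counts and the exponent are made-up illustrative values.

```python
import numpy as np

def contrast_weights(counts: dict, alpha: float = 0.5) -> dict:
    """Inverse-frequency weights, softened by exponent alpha and normalized
    so that the weights average to 1 across contrasts."""
    names = list(counts)
    freq = np.array([counts[n] for n in names], dtype=float)
    w = (freq.sum() / freq) ** alpha
    w *= len(names) / w.sum()
    return dict(zip(names, w))

# Hypothetical number of training volumes available per MR contrast.
counts = {"T1w": 900, "T2w": 300, "FLAIR": 120, "PDw": 60}
for name, w in contrast_weights(counts).items():
    print(f"{name}: weight {w:.2f}")  # rarer contrasts get larger loss weights
```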

Citations: 0
SARD: Fake news detection based on CLIP contrastive learning and multimodal semantic alignment
IF 5.2 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-14 | DOI: 10.1016/j.jksuci.2024.102160

The automatic detection of multimodal fake news can be used to effectively identify potential risks in cyberspace. Most of the existing multimodal fake news detection methods focus on fully exploiting textual and visual features in news content, thus neglecting the full utilization of news social context features that play an important role in improving fake news detection. To this end, we propose a new fake news detection method based on CLIP contrastive learning and multimodal semantic alignment (SARD). SARD leverages cutting-edge multimodal learning techniques, such as CLIP, and robust cross-modal contrastive learning methods to integrate features of news-oriented heterogeneous information networks (N-HIN) with multi-level textual and visual features into a unified framework for the first time. This framework not only achieves cross-modal alignment between deep textual and visual features but also considers cross-modal associations and semantic alignments across different modalities. Furthermore, SARD enhances fake news detection by aligning semantic features between news content and N-HIN features, an aspect largely overlooked by existing methods. We test and evaluate SARD on three real-world datasets. Experimental results demonstrate that SARD significantly outperforms the twelve state-of-the-art competitors in fake news detection, with an average improvement of 2.89% in Mac.F1 score and 2.13% in accuracy compared to the leading baseline models across three datasets.
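To make the CLIP-style cross-modal alignment concrete, here is a minimal numpy sketch of the standard symmetric InfoNCE objective over a batch of L2-normalized text and image embeddings, where matched pairs sit on the diagonal of the similarity matrix. This is the generic CLIP formulation, not SARD's full objective (which also aligns N-HIN features); the batch size, dimension, and temperature are arbitrary.

```python
import numpy as np

def clip_contrastive_loss(text_emb, image_emb, temperature=0.07):
    """Symmetric cross-entropy over a cosine-similarity matrix; row i of each
    modality is the positive pair of row i of the other modality."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature                      # (B, B)
    labels = np.arange(len(logits))

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)         # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()        # diagonal = positives

    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
text, image = rng.normal(size=(8, 64)), rng.normal(size=(8, 64))
print(clip_contrastive_loss(text, image))
```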

Citations: 0
Anomalous behavior detection based on optimized graph embedding representation in social networks
IF 5.2 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-13 | DOI: 10.1016/j.jksuci.2024.102158

Anomalous behaviors in social networks can lead to privacy leaks and the spread of false information. In this paper, we propose an anomalous behavior detection method based on an optimized graph embedding representation. Specifically, user behavior logs are first extracted into a temporal knowledge graph of social network user behavior. A graph embedding representation method then transforms the network topology and temporal information in this knowledge graph into structural embedding vectors and temporal embedding vectors, and a hybrid attention mechanism merges the two types of vectors into the final entity embedding, which is used to predict and complete the temporal knowledge graph of user behavior. We use graph neural networks that take the temporal information of user behaviors as a time constraint and capture both behavioral and semantic information; the two parts of information are converted into vectors, concatenated, and linearly transformed to obtain a comprehensive representation vector of the whole subgraph, which is fed to a joint deep learning model to evaluate abnormal behavior. Finally, we perform experiments on the Yelp dataset and validate that our method achieves a 9.56% improvement in F1-score.
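The hybrid attention fusion described above can be illustrated with a small sketch: given a structural embedding and a temporal embedding for the same entity, scalar attention scores are computed against a shared query and used as softmax weights for the merge. Dimensions and the random "learned" parameters are placeholders; the paper's actual attention design may differ.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hybrid_attention_fusion(struct_emb, temp_emb, W_q, q):
    """Merge two views of an entity into one embedding.

    Each view is projected, scored against a shared query vector, and the
    softmax-normalized scores weight the sum of the original views."""
    views = np.stack([struct_emb, temp_emb])        # (2, d)
    scores = (views @ W_q) @ q                      # (2,)
    weights = softmax(scores)
    return weights @ views, weights                 # fused (d,), attention (2,)

rng = np.random.default_rng(0)
d = 16
struct_emb, temp_emb = rng.normal(size=d), rng.normal(size=d)
W_q, q = rng.normal(size=(d, d)) / np.sqrt(d), rng.normal(size=d)
fused, attn = hybrid_attention_fusion(struct_emb, temp_emb, W_q, q)
print(attn, fused.shape)
```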

Citations: 0
Efficient Wear-Leveling-Aware Data Placement for LSM-Tree based key-value store on ZNS SSDs
IF 5.2 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-08 | DOI: 10.1016/j.jksuci.2024.102156

The emerging Zoned Namespace (ZNS) SSD is a new type of Solid State Drive that manages data in zones; it can achieve higher performance by strictly following the sequential write mode in each zone and by alleviating the redundant overhead of garbage collection. Unfortunately, flash memory suffers from a limited number of program/erase cycles. Meanwhile, an inappropriate data placement strategy in the storage system can lead to imbalanced wear among zones, severely reducing the lifespan of ZNS SSDs. In this paper, we propose Wear-Leveling-Aware Data Placement (WADP) to solve this problem at negligible performance cost. First, WADP employs a wear-aware empty-zone allocation algorithm that quantifies zone resets and chooses the less-worn zone for each allocation. Second, to prevent long-term zone occupation by infrequently written data (namely cold data), we propose a wear-leveling cold-zone monitoring mechanism that identifies cold zones dynamically. Finally, WADP adopts a real-time I/O pressure-aware data migration mechanism to adaptively migrate cold data and achieve wear leveling among zones. We implement the proposed WADP in ZenFS and evaluate it with widely used workloads. Compared with state-of-the-art solutions, i.e., LIZA and FAR, the experimental results show that WADP can significantly reduce the standard deviation of zone resets while maintaining decent performance.
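The allocation idea can be sketched in a few lines: track a reset counter per zone, always hand out the empty zone with the fewest resets, and treat zones whose data has not been rewritten for a long time as cold candidates for migration. The `Zone` fields, the cold-age cutoff, and the API shape below are illustrative assumptions, not ZenFS's real interfaces.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Zone:
    zone_id: int
    resets: int = 0                 # how many times this zone has been reset
    empty: bool = True
    last_write: float = field(default_factory=time.monotonic)

def allocate_wear_aware(zones):
    """Pick the empty zone with the fewest resets (least-worn first)."""
    candidates = [z for z in zones if z.empty]
    if not candidates:
        raise RuntimeError("no empty zone available")
    zone = min(candidates, key=lambda z: z.resets)
    zone.empty = False
    zone.last_write = time.monotonic()
    return zone

def find_cold_zones(zones, max_age_s=3600.0):
    """Zones holding data that has not been rewritten within max_age_s are
    treated as cold and become candidates for wear-leveling migration."""
    now = time.monotonic()
    return [z for z in zones if not z.empty and now - z.last_write > max_age_s]

zones = [Zone(0, resets=5), Zone(1, resets=1), Zone(2, resets=3, empty=False)]
print(allocate_wear_aware(zones).zone_id)   # zone 1: least-worn empty zone
```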

Citations: 0
Structure recovery from single omnidirectional image with distortion-aware learning
IF 5.2 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-08 | DOI: 10.1016/j.jksuci.2024.102151

Recovering structures from images with 180° or 360° FoV is pivotal in computer vision and computational photography, particularly for VR/AR/MR and autonomous robotics applications. Due to varying distortions and the complexity of indoor scenes, recovering flexible structures from a single image is challenging. We introduce OmniSRNet, a comprehensive deep learning framework that merges distortion-aware learning with bidirectional LSTM. Utilizing a curated dataset with optimized panorama and expanded fisheye images, our framework features a distortion-aware module (DAM) for extracting features and a horizontal and vertical step module (HVSM) of LSTM for contextual predictions. OmniSRNet excels in applications such as VR-based house viewing and MR-based video surveillance, achieving leading results on cuboid and non-cuboid datasets. The code and dataset can be accessed at https://github.com/mmlph/OmniSRNet/.
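The distortion that a distortion-aware module must cope with can be illustrated independently of the network: in an equirectangular panorama each image row corresponds to a latitude, and horizontal stretching grows toward the poles, so a simple per-row weight proportional to cos(latitude) down-weights the heavily distorted polar rows. This is a generic property of the projection offered only as intuition; OmniSRNet's DAM learns its handling of distortion rather than using this fixed weighting.

```python
import numpy as np

def equirect_row_weights(height: int) -> np.ndarray:
    """Per-row weights for an equirectangular panorama of a given height.

    Row r maps to a latitude in (-pi/2, pi/2); pixels near the poles cover far
    less solid angle, so they receive proportionally smaller weights."""
    latitudes = (np.arange(height) + 0.5) / height * np.pi - np.pi / 2
    return np.cos(latitudes)

w = equirect_row_weights(512)
print(w[:3], w[256], w[-3:])   # near zero at the poles, ~1 at the equator
```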

Citations: 0
Performance analysis of cloud resource allocation scheme with virtual machine inter-group asynchronous failure
IF 5.2 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-07 | DOI: 10.1016/j.jksuci.2024.102155

The recent rapid expansion of cloud computing has made the Cloud Data Center (CDC) increasingly prominent. However, with a single physical machine (PM) in the CDC, the waiting time of user requests may increase greatly. We propose a cloud resource allocation scheme with virtual machine (VM) inter-group asynchronous failure. This scheme improves request throughput and reduces request waiting time. In particular, two PMs with different service rates, each mapping multiple VMs, are deployed to evenly distribute cloud users' requests, and we assume that the two PMs fail and are repaired with different probabilities. A finite cache is also introduced to reduce the blocking rate of requests. We model the VMs and user requests and build a 3-dimensional Markov chain (3DMC) to derive the performance metrics of the requests. Numerical experiments are performed to obtain graphs of multiple performance metrics for the requests. By comparing our scheme with the traditional cloud resource allocation scheme in which VMs fail synchronously, we find that our scheme improves throughput, while each scheme has its own advantages and disadvantages in terms of the request blocking rate.
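The performance-metric pipeline behind a Markov-chain model like the one above can be sketched with a much smaller example: build the generator matrix of a finite-capacity birth-death queue (an M/M/1/K stand-in, without the failure/repair dimensions of the paper's 3-D chain), solve the global balance equations for the steady-state distribution, and read off the blocking probability and throughput. The rates and capacity are arbitrary illustrative numbers.

```python
import numpy as np

def mm1k_metrics(lam: float, mu: float, K: int):
    """Steady state of an M/M/1/K queue via its generator matrix Q.

    Solves pi @ Q = 0 with sum(pi) = 1; blocking probability is pi[K] and
    throughput is the accepted arrival rate lam * (1 - pi[K])."""
    Q = np.zeros((K + 1, K + 1))
    for n in range(K + 1):
        if n < K:
            Q[n, n + 1] = lam          # arrival
        if n > 0:
            Q[n, n - 1] = mu           # service completion
        Q[n, n] = -Q[n].sum()
    # Replace one (redundant) balance equation with the normalization constraint.
    A = np.vstack([Q.T[:-1], np.ones(K + 1)])
    b = np.zeros(K + 1)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    return pi, pi[K], lam * (1 - pi[K])

pi, p_block, throughput = mm1k_metrics(lam=0.8, mu=1.0, K=5)
print(f"blocking={p_block:.4f}, throughput={throughput:.4f}")
```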

Citations: 0
LDNet: High Accuracy Fish Counting Framework using Limited training samples with Density map generation Network
IF 5.2 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-07 | DOI: 10.1016/j.jksuci.2024.102143

Fish counting is crucial in fish farming. Density map-based fish counting methods hold promise for high-density scenarios; however, they suffer from ineffective ground-truth density map generation. High labeling complexity and disturbance to fish growth during data collection are also challenging to mitigate. To address these issues, this study introduces LDNet, a versatile attention-based network. An imbalanced Optimal Transport (OT)-based loss function is used to effectively supervise density map generation. Additionally, an Image Manipulation-Based Data Augmentation (IMBDA) strategy is applied to simulate training data from diverse scenarios at fixed viewpoints, in order to build a model that is robust to different environmental changes. Leveraging a limited number of training samples, our approach achieves notable performance with an MAE of 8.27, an RMSE of 9.97, and an accuracy of 99.01% on our self-curated Fish Count-824 dataset. Impressively, our method also demonstrates superior counting performance on the vehicle counting datasets CARPK and PURPK+ and on the Penaeus_1k penaeus larvae dataset when only 5%–10% of the training data is used. These outcomes compellingly showcase the wide applicability of our proposed approach across various cases. This innovative approach can potentially contribute to aquaculture management and ecological preservation through accurate fish counting.
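The density-map formulation that LDNet builds on is easy to show in isolation: each annotated fish point contributes a unit-mass Gaussian to a ground-truth density map, and the count is simply the sum over the map. The image size, point locations, and sigma below are invented for illustration; LDNet's own ground-truth generation and OT-based supervision are more involved.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_density_map(shape, points, sigma=4.0):
    """Ground-truth density map: a unit delta at every annotated point, blurred
    by a normalized Gaussian, so (away from the image border) the map still
    sums to roughly the number of annotated objects."""
    density = np.zeros(shape, dtype=np.float64)
    for y, x in points:
        density[int(y), int(x)] += 1.0
    return gaussian_filter(density, sigma=sigma)

points = [(30, 40), (32, 45), (100, 200), (101, 198)]   # hypothetical annotations
dmap = make_density_map((256, 256), points)
print(round(dmap.sum(), 3))   # ~4.0: counting = integrating the density map
```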

Citations: 0
Leveraging syntax-aware models and triaffine interactions for nominal compound chain extraction
IF 5.2 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-07 | DOI: 10.1016/j.jksuci.2024.102153

Recently, Nominal Compound Chain Extraction (NCCE) has been proposed to detect related mentions in a document and thereby improve understanding of the document's topic. NCCE involves longer span detection and more complicated rules for relation decisions, making it more difficult than previous chain extraction tasks such as coreference resolution. Current methods achieve some progress on the NCCE task, but they make insufficient use of syntax information and mine mention relations incompletely, both of which are helpful for NCCE. To fill these gaps, we propose a syntax-guided model using triaffine interactions to improve performance on the NCCE task. Instead of relying solely on textual information to detect compound mentions, we also utilize the noun-phrase (NP) boundary information in constituency trees to incorporate prior boundary knowledge. In addition, we use biaffine and triaffine operations to mine mention interactions in the local and global context of a document. To show the effectiveness of our method, we conduct a series of experiments on a human-annotated NCCE dataset. Experimental results show that our model significantly outperforms the baseline systems. Moreover, in-depth analyses reveal the effect of utilizing syntactic information and mention interactions in the local and global contexts.
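For readers unfamiliar with the terminology, the biaffine and triaffine interactions mentioned above reduce to small tensor contractions, sketched below with numpy einsum: a biaffine score couples two vectors through a matrix, and a triaffine score couples three vectors through a rank-3 weight tensor. The dimensions and random weights are placeholders; the paper applies these operations to span and context representations.

```python
import numpy as np

def biaffine(a, b, W):
    """Biaffine interaction: s = a^T W b, with W of shape (d_a, d_b)."""
    return np.einsum("i,ij,j->", a, W, b)

def triaffine(a, b, c, W):
    """Triaffine interaction: s = sum_ijk W[i,j,k] a_i b_j c_k, coupling three
    representations (e.g. span, mention, context) in a single score."""
    return np.einsum("ijk,i,j,k->", W, a, b, c)

rng = np.random.default_rng(0)
d = 8
a, b, c = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
print(biaffine(a, b, rng.normal(size=(d, d))))
print(triaffine(a, b, c, rng.normal(size=(d, d, d))))
```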

Citations: 0