
Latest publications in Information Systems

On the cognitive and behavioral effects of abstraction and fragmentation in modularized process models
IF 3 | Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-06 | DOI: 10.1016/j.is.2024.102424

Process model comprehension is essential for a variety of technical and managerial tasks. To facilitate comprehension, process models are often divided into subprocesses when they reach a certain size. However, depending on the task type, this can either support or impede comprehension. To investigate this hypothesis, we conduct a comprehensive eye-tracking study in which we test two different types of comprehension tasks: local tasks focusing on a single subprocess, thereby benefiting from abstraction (i.e., irrelevant information is hidden), and global tasks comprising multiple subprocesses, thereby also benefiting from abstraction but impeded by fragmentation (i.e., relevant information is distributed across multiple fragments). Our subsequent analysis at the task (coarse-grained) and phase (fine-grained) levels confirms the opposing effects of abstraction and fragmentation. For global tasks, we observe lower task comprehension, higher cognitive load, and more complex search and inference behaviors than for local ones. An additional qualitative analysis of search and inference phases, based on process maps and time series, provides further insights into the evolution of information processing and confirms the differences between the two task types. The fine-grained analysis at the phase level is based on a novel research method that allows information search to be clearly separated from information inference; we provide an extensive validation of this method. The outcome of this work is a more thorough understanding of the effects of fragmentation in modularized process models, at both coarse- and fine-grained levels, enabling the development of task- and user-centric support and opening up future research opportunities to further investigate information processing during process comprehension.

Citations: 0
SFTe: Temporal knowledge graphs embedding for future interaction prediction
IF 3 | Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-03 | DOI: 10.1016/j.is.2024.102423
Wei Jia , Ruizhe Ma , Weinan Niu , Li Yan , Zongmin Ma

Interaction prediction is a crucial task in the Social Internet of Things (SIoT), serving diverse applications including social network analysis and recommendation systems. However, the dynamic nature of items, users, and their interactions over time poses challenges in effectively capturing and analyzing these changes. Existing interaction prediction models often overlook the temporal aspect and lack the ability to model multi-relational user-item interactions over time. To address these limitations, in this paper we propose a Structure, Facticity, and Temporal information preservation embedding model (SFTe) to predict future interactions. Our model leverages the advantages of Temporal Knowledge Graphs (TKGs), which can capture both multi-relational structure and evolution. We begin by modeling user-item interactions over time by constructing a Temporal Interaction Knowledge Graph (TIKG). We then employ Structure Embedding (SE), Facticity Embedding (FE), and Temporal Embedding (TE) to capture topological structure, facticity consistency, and temporal dependence, respectively. In SE, we focus on preserving first-order relationships to capture the topological structure of the TIKG. In the FE component, given the distinct nature of the SIoT, we introduce an attention mechanism to capture the effect of entities sharing the same additional information when generating subgraph embeddings. Lastly, TE utilizes recurrent neural networks to model the temporal dependencies among subgraphs and capture the evolving dynamics of the interactions over time. Experimental results on standard future interaction prediction demonstrate the superiority of the SFTe model over state-of-the-art methods. Our model effectively addresses the challenges of time-aware interaction prediction, showcasing the potential of TKGs to enhance prediction performance.
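The core of the TE component is folding a time-ordered sequence of subgraph embeddings through a recurrent update, so later snapshots depend on earlier ones. A minimal sketch of that idea follows; the function name, weights, and dimensions are illustrative assumptions, not the paper's actual SFTe model.

```python
# Hypothetical sketch of the temporal-dependence idea behind TE: a toy
# recurrent update over per-snapshot subgraph embeddings. Not the
# authors' implementation; weights and dimensions are illustrative.
import math

def tanh_vec(v):
    return [math.tanh(x) for x in v]

def recurrent_fold(subgraph_embeddings, w_in=0.5, w_rec=0.9):
    """Fold a time-ordered list of subgraph embeddings into one state
    vector; each step mixes the new snapshot with the running state,
    so order (i.e., temporal dependence) matters."""
    dim = len(subgraph_embeddings[0])
    state = [0.0] * dim
    for emb in subgraph_embeddings:  # chronological order
        state = tanh_vec([w_rec * s + w_in * e for s, e in zip(state, emb)])
    return state

# Three toy snapshots of a 2-d subgraph embedding:
snapshots = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
h = recurrent_fold(snapshots)
print(h)
```

Reversing the snapshot order yields a different state vector, which is exactly the property a purely set-based aggregation would lose.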

Citations: 0
An efficient approach for discovering Graph Entity Dependencies (GEDs)
IF 3 | Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-28 | DOI: 10.1016/j.is.2024.102421
Dehua Liu , Selasi Kwashie , Yidi Zhang , Guangtong Zhou , Michael Bewong , Xiaoying Wu , Xi Guo , Keqing He , Zaiwen Feng

Graph entity dependencies (GEDs) are novel graph constraints for property graphs, unifying keys and functional dependencies. They have proven useful in many real-world data quality and data management tasks, including fact checking on social media networks and entity resolution. In this paper, we study the discovery problem of GEDs: finding a minimal cover of valid GEDs in a given graph dataset. We formalise the problem and propose an effective and efficient approach to overcome major bottlenecks in GED discovery. In particular, we leverage existing graph partitioning algorithms to enable fast discovery of GED scopes, and employ effective pruning strategies over the prohibitively large space of candidate dependencies. Furthermore, we define an interestingness measure for GEDs based on the minimum description length principle, to score and rank the mined cover set of GEDs. Finally, we demonstrate the scalability and effectiveness of our GED discovery approach through extensive experiments on real-world benchmark graph datasets, and present the usefulness of the discovered rules in different downstream data quality management applications.
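The minimum description length (MDL) principle behind the interestingness measure can be illustrated with a toy scoring function: a rule is worth keeping when the data it compresses outweighs the cost of stating the rule. The formula below is an assumption for illustration only, not the paper's actual measure.

```python
# Illustrative MDL-style interestingness score (an assumed, simplified
# formula; the paper's actual measure is not reproduced here). A rule
# "saves" bits when replacing the records it explains costs less than
# stating the rule itself.
def mdl_score(rule_length_bits, matches, bits_per_match):
    """Bits saved by replacing `matches` raw records (each costing
    `bits_per_match` bits) with one rule of `rule_length_bits` bits."""
    saved = matches * bits_per_match
    return saved - rule_length_bits

# Short rule with wide coverage vs. long rule with narrow coverage:
rules = {"r1": mdl_score(40, 100, 8),
         "r2": mdl_score(200, 10, 8)}
ranked = sorted(rules, key=rules.get, reverse=True)
print(ranked)
```

Ranking mined GEDs by such a score lets a cleaning pipeline surface compact, high-coverage dependencies first.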

Citations: 0
Analyzing workload trends for boosting triple stores performance
IF 3 | Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-10 | DOI: 10.1016/j.is.2024.102420
Ahmed Al-Ghezi, Lena Wiese

The Resource Description Framework (RDF) is widely used to model web data. The scale and complexity of the modeled data pose performance challenges for RDF triple stores. Workload adaptation is one important strategy for dealing with those challenges at the storage level. Current workload-adaptation approaches lack the necessary generalization of the problem and optimize only part of the storage layer with the workload (mostly replication). This leaves a large performance gap in other data structures (e.g., indexes and caches) that could benefit heavily from the same workload-adaptation strategy. Moreover, workload statistics are built collectively in most current approaches, so the analysis process is unaware of whether workload items are old or recent. This fails to capture the temporal trends that naturally exist in user queries, causing the analysis process to lag behind rapid workload development. We present a novel universal adaptation approach to the storage management of a distributed RDF store. The system aims to find optimal data assignments to the different indexes, replications, and join caches within the limited storage space. We present a cost model based on the workload, which often contains frequent patterns. The workload is dynamically and continuously analyzed to evaluate predefined rules considering the benefits and costs of all options for assigning data to the storage structures. The objective is to reduce query execution time by letting different data containers compete for the limited storage space. By modeling the workload statistics as time series, we can apply well-known smoothing techniques that allow the importance of the workload to decay over time. This keeps the universal adaptation tuned to potential changes in workload trends.
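The "well-known smoothing techniques" the abstract refers to can be illustrated with exponential smoothing over a per-pattern access-count series, which makes recent accesses dominate a pattern's importance. The function name and the `alpha` value are illustrative assumptions, not values from the paper.

```python
# Exponentially weighted smoothing of a time-ordered access-count
# series, so old workload observations decay in importance. The alpha
# parameter is illustrative, not taken from the paper.
def smoothed_importance(counts, alpha=0.5):
    """EWMA over chronological access counts; the most recent
    observations contribute most to the final score."""
    score = counts[0]
    for c in counts[1:]:
        score = alpha * c + (1 - alpha) * score
    return score

old_heavy = [90, 80, 5, 5, 5]    # query pattern that was hot, now cold
recent_hot = [5, 5, 5, 80, 90]   # pattern heating up recently
print(smoothed_importance(old_heavy), smoothed_importance(recent_hot))
```

A collective (unsmoothed) sum would rank both patterns equally; the decayed score correctly prefers the recently hot pattern when deciding what to index, replicate, or cache.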

Citations: 0
Detecting the adversarially-learned injection attacks via knowledge graphs
IF 3.7 | Zone 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-06-04 | DOI: 10.1016/j.is.2024.102419
Yaojun Hao , Haotian Wang , Qingshan Zhao , Liping Feng , Jian Wang

Over the past two decades, many studies have devoted a good deal of attention to detecting injection attacks in recommender systems. However, most focus on detecting heuristically-generated injection attacks, which are fabricated by hand-engineering. In practice, adversarially-learned injection attacks based on optimization methods have been proposed, with enhanced camouflage and threat; under such attacks, traditional detection models are likely to be fooled. In this paper, a detection method is proposed for adversarially-learned injection attacks via knowledge graphs. Firstly, leveraging the wealth of information in knowledge graphs, item-pairs on the extension hops of knowledge graphs are regarded as implicit preferences of users. The item-pair popularity series and the user item-pair matrix are constructed to express users' preferences. Secondly, a word embedding model and principal component analysis are utilized to extract users' initial vector representations from the item-pair popularity series and the item-pair matrix, respectively. Moreover, Variational Autoencoders with improved R-drop regularization are used to reconstruct the embedding vectors and further identify shilling profiles. Finally, experiments on three real-world datasets indicate that the proposed detector outperforms benchmark methods when detecting adversarially-learned injection attacks. In addition, the detector is evaluated under heuristically-generated injection attacks and demonstrates outstanding performance.
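The detection step rests on a reconstruction-error test: profiles the model reconstructs poorly are flagged as suspicious. The toy sketch below substitutes the mean genuine profile for the paper's Variational Autoencoder, so it only illustrates the thresholding logic; all names and the threshold rule are assumptions.

```python
# Toy reconstruction-error detector (illustrative stand-in for the
# paper's VAE): a profile is flagged when its squared error against a
# reference reconstruction exceeds the worst error seen on genuine data.
def mean_profile(profiles):
    dim = len(profiles[0])
    return [sum(p[i] for p in profiles) / len(profiles) for i in range(dim)]

def reconstruction_error(profile, reference):
    return sum((a - b) ** 2 for a, b in zip(profile, reference))

genuine = [[1, 0, 1, 0], [1, 0, 1, 1], [1, 1, 1, 0]]
ref = mean_profile(genuine)                       # "reconstruction" model
suspect = [0, 1, 0, 1]                            # very unlike genuine users
threshold = max(reconstruction_error(p, ref) for p in genuine)
print(reconstruction_error(suspect, ref) > threshold)
```

An actual VAE replaces `mean_profile` with a learned encoder/decoder, but the flagging decision has the same shape: error above a calibrated threshold means shilling profile.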

Citations: 0
FDM: Effective and efficient incident detection on sparse trajectory data
IF 3.7 | Zone 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-06-01 | DOI: 10.1016/j.is.2024.102418
Xiaolin Han , Tobias Grubenmann , Chenhao Ma , Xiaodong Li , Wenya Sun , Sze Chun Wong , Xuequn Shang , Reynold Cheng

Incident detection (ID), or the automatic discovery of anomalies from road traffic data (e.g., road sensor and GPS data), enables emergency actions (e.g., rescuing injured people) to be carried out in a timely fashion. Existing ID solutions based on data mining or machine learning often rely on dense traffic data; for instance, sensors installed on highways provide frequent updates of road information. In this paper, we ask the question: can ID be performed on sparse traffic data (e.g., location data obtained from GPS devices mounted on vehicles)? As these data may not be enough to describe the state of the roads involved, they can undermine the effectiveness of existing ID solutions. To tackle this challenge, we borrow an important insight from the transportation area, which uses trajectories (i.e., moving histories of vehicles) to derive incident patterns. We study how to obtain incident patterns from trajectories and devise a new solution, called Filter-Discovery-Match (FDM), to detect anomalies in sparse traffic data. We have also developed a fast algorithm to support FDM. Experiments on a taxi dataset in Hong Kong and a simulated dataset show that FDM is more effective than state-of-the-art ID solutions on sparse traffic data, and is also efficient.
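The trajectory-based insight can be sketched as comparing observed travel times on a road segment against a historical baseline: even a handful of GPS traces reveals a slowdown. The thresholding rule below is an illustrative assumption, not the paper's actual Filter-Discovery-Match pipeline.

```python
# Hedged sketch of deriving an incident signal from sparse trajectories:
# flag a segment when the median observed travel time far exceeds its
# historical mean. The factor-based rule is an assumption for
# illustration, not FDM itself.
def incident_suspected(observed_times, baseline_mean, factor=2.0):
    """True when the median of the (possibly few) observed travel
    times exceeds `factor` times the segment's historical mean."""
    ordered = sorted(observed_times)
    median = ordered[len(ordered) // 2]
    return median > factor * baseline_mean

normal = [55, 60, 62, 58, 61]       # seconds to traverse the segment
congested = [140, 150, 180, 160]    # sparse GPS reports during an incident
print(incident_suspected(normal, 60), incident_suspected(congested, 60))
```

Using the median rather than the mean keeps a single noisy GPS report from triggering a false alarm, which matters precisely because the data are sparse.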

Citations: 0
Enhancing Entity Resolution with a hybrid Active Machine Learning framework: Strategies for optimal learning in sparse datasets
IF 3.7 | Zone 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-05-25 | DOI: 10.1016/j.is.2024.102410
Mourad Jabrane , Hiba Tabbaa , Aissam Hadri , Imad Hafidi

When solving the problem of identifying similar records in different datasets (known as Entity Resolution, or ER), one big challenge is the lack of labeled data, which is crucial for building strong machine learning models but can be expensive and time-consuming to obtain. Active Machine Learning (ActiveML) is a helpful approach because it cleverly picks the most useful pieces of data to learn from, using two main ideas: informativeness and representativeness. Typical ActiveML methods used in ER usually depend too much on just one of these ideas, which can make them less effective, especially when starting with very little data. Our research introduces a new combined method that uses both ideas together. We created two versions of this method, called DPQ and STQ, and tested them on eleven different real-world datasets. The results showed that our new method improves ER by producing better scores, more stable models, and faster learning with less training data compared to existing methods.
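The hybrid idea of combining the two selection criteria can be sketched as a weighted score per candidate record pair. The weighting scheme and function names below are illustrative assumptions; DPQ and STQ themselves are not reproduced here.

```python
# Minimal sketch of a hybrid active-learning query score combining
# informativeness (classifier uncertainty) with representativeness
# (similarity to the unlabeled pool). The beta weighting is an
# assumption, not the paper's DPQ/STQ strategy.
def informativeness(prob_match):
    """Uncertainty of a record pair: highest when the classifier's
    match probability is near 0.5, zero when it is 0 or 1."""
    return 1.0 - abs(prob_match - 0.5) * 2.0

def representativeness(sims_to_pool):
    """Mean similarity of the candidate to the unlabeled pool."""
    return sum(sims_to_pool) / len(sims_to_pool)

def hybrid_score(prob_match, sims_to_pool, beta=0.5):
    return (beta * informativeness(prob_match)
            + (1 - beta) * representativeness(sims_to_pool))

# An uncertain, central pair should outrank a confidently-labeled outlier:
central = hybrid_score(0.52, [0.8, 0.7, 0.9])
outlier = hybrid_score(0.99, [0.1, 0.2, 0.1])
print(central > outlier)
```

A purely uncertainty-based sampler would also pick uncertain outliers that teach the model little about the bulk of the data; the representativeness term guards against exactly that, which is the failure mode the abstract describes.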

Citations: 0
HUM-CARD: A human crowded annotated real dataset
IF 3.7 Q1 Computer Science (CAS Zone 2) Pub Date: 2024-05-21 DOI: 10.1016/j.is.2024.102409
Giovanni Di Gennaro , Claudia Greco , Amedeo Buonanno , Marialucia Cuciniello , Terry Amorese , Maria Santina Ler , Gennaro Cordasco , Francesco A.N. Palmieri , Anna Esposito

The growth of data-driven approaches typical of Machine Learning leads to an ever-increasing need for large quantities of labeled data. Unfortunately, these attributions are often made automatically and/or crudely, thus destroying the very concept of “ground truth” they are supposed to represent. To address this problem, we introduce HUM-CARD, a dataset of human trajectories in crowded contexts manually annotated by nine experts in engineering and psychology, totaling approximately 5000 hours. Our multidisciplinary labeling process has enabled the creation of a well-structured ontology, accounting for both individual and contextual factors influencing human movement dynamics in shared environments. Preliminary and descriptive analyses are presented, highlighting the potential benefits of this dataset and its methodology in various research challenges.

Citations: 0
Heart failure prognosis prediction: Let’s start with the MDL-HFP model
IF 3.7 Q1 Computer Science (CAS Zone 2) Pub Date: 2024-05-21 DOI: 10.1016/j.is.2024.102408
Huiting Ma , Dengao Li , Jian Fu , Guiji Zhao , Jumin Zhao

Heart failure, as a critical symptom or terminal stage of assorted heart diseases, is a world-class public health problem. Establishing a prognostic model can help identify high-risk patients, save their lives promptly, and reduce the medical burden. Although integrating structured indicators and unstructured text for complementary information has been proven effective in disease prediction tasks, there are still certain limitations. Firstly, the processing of individual branch modalities is easily overlooked, which can affect the final fusion result. Secondly, simple fusion will lose complementary information between modalities, limiting the network’s learning ability. Thirdly, incomplete interpretability can affect the practical application and development of the model. To overcome these challenges, this paper proposes the MDL-HFP multimodal model for predicting patient prognosis using the MIMIC-III public database. Firstly, the ADASYN algorithm is used to handle the imbalance of data categories. Then, the proposed improved Deep&Cross Network is used for automatic feature selection to encode structured sparse features, and implicit graph structure information is introduced to encode unstructured clinical notes based on the HR-BGCN model. Finally, the information of the two modalities is fused through a cross-modal dynamic interaction layer. The model’s effectiveness is verified by comparison against multiple advanced multimodal deep learning models, with an average F1 score of 90.42% and an average accuracy of 90.70%. The model proposed in this paper can accurately classify the readmission status of patients, thereby assisting doctors in making judgments and improving the patient’s prognosis. Further visual analysis demonstrates the usability of the model, providing a comprehensive explanation for clinical decision-making.
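The class-imbalance step mentioned in the abstract can be illustrated with a small NumPy sketch of ADASYN-style oversampling. This is a toy for intuition only, not the paper's pipeline: the neighbourhood size `k`, the budgeting by rounding, and the interpolation scheme are assumptions, and a real pipeline would use a vetted implementation such as imbalanced-learn's `ADASYN`.

```python
import numpy as np

def adasyn_like_oversample(X_min, X_maj, n_new, k=3, seed=0):
    """Toy ADASYN-style oversampling (illustrative only): minority points
    whose neighbourhoods contain more majority samples receive more
    synthetic points, created by interpolating toward minority neighbours."""
    rng = np.random.default_rng(seed)
    X_all = np.vstack([X_min, X_maj])
    n_min = len(X_min)

    # r_i: fraction of each minority point's k nearest neighbours (over the
    # whole dataset) that belong to the majority class.
    d_all = np.linalg.norm(X_min[:, None] - X_all[None], axis=-1)
    nn_all = np.argsort(d_all, axis=1)[:, 1:k + 1]   # drop the point itself
    r = (nn_all >= n_min).mean(axis=1)               # indices >= n_min are majority

    # Turn r into a per-point budget of synthetic samples (uniform if no
    # minority point has majority neighbours).
    weights = r / r.sum() if r.sum() > 0 else np.full(n_min, 1.0 / n_min)
    counts = np.round(weights * n_new).astype(int)

    # Interpolate between each minority point and a random minority neighbour.
    d_min = np.linalg.norm(X_min[:, None] - X_min[None], axis=-1)
    nn_min = np.argsort(d_min, axis=1)[:, 1:k + 1]
    synth = []
    for i, c in enumerate(counts):
        for _ in range(c):
            j = rng.choice(nn_min[i])
            lam = rng.random()
            synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.asarray(synth)

# Imbalanced toy data: 5 minority vs. 40 majority samples in 2-D.
rng = np.random.default_rng(42)
X_min = rng.normal(0.0, 0.5, size=(5, 2))
X_maj = rng.normal(2.0, 0.5, size=(40, 2))
X_new = adasyn_like_oversample(X_min, X_maj, n_new=10)
```

Because each synthetic point is a convex combination of two minority samples, the generated data stays inside the minority region rather than drifting into majority territory.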

Citations: 0
GAMA: A multi-graph-based anomaly detection framework for business processes via graph neural networks
IF 3.7 Q1 Computer Science (CAS Zone 2) Pub Date: 2024-05-19 DOI: 10.1016/j.is.2024.102405
Wei Guan, Jian Cao, Yang Gu, Shiyou Qian

Anomalies in business processes are inevitable for various reasons such as system failures and operator errors. Detecting anomalies is important for the management and optimization of business processes. However, prevailing anomaly detection approaches often fail to capture crucial structural information about the underlying process. To address this, we propose a multi-Graph based Anomaly detection fraMework for business processes via grAph neural networks, named GAMA. GAMA makes use of structural process information and attribute information in a more integrated way. In GAMA, multiple graphs are applied to model a trace in which each attribute is modeled as a separate graph. In particular, the graph constructed for the special attribute activity reflects the control flow. Then GAMA employs a multi-graph encoder and a multi-sequence decoder on multiple graphs to detect anomalies in terms of the reconstruction errors. Moreover, three teacher forcing styles are designed to enhance GAMA’s ability to reconstruct normal behaviors and thus improve detection performance. We conduct extensive experiments on both synthetic logs and real-life logs. The experiment results demonstrate that GAMA outperforms state-of-the-art methods for both trace-level and attribute-level anomaly detection.
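GAMA's multi-graph encoder and multi-sequence decoder are beyond a short snippet, but the reconstruction-error criterion the abstract describes can be shown with a linear (PCA) encoder/decoder as a stand-in. Everything here — the trace feature encoding, the component count, the toy data — is an assumption of the sketch, not the paper's architecture.

```python
import numpy as np

def fit_reconstructor(X_train, n_components=2):
    """Fit a linear encoder/decoder on normal traces (a PCA stand-in for
    GAMA's multi-graph encoder / multi-sequence decoder)."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    return mu, Vt[:n_components]

def anomaly_scores(X, mu, V):
    """Score traces by reconstruction error: encode, decode, compare."""
    Z = (X - mu) @ V.T          # encode into the latent space
    X_hat = Z @ V + mu          # decode back to the input space
    return np.linalg.norm(X - X_hat, axis=1)

# Train on feature vectors of normal traces, then score two test traces:
# one normal-like, one that deviates strongly from normal behavior.
rng = np.random.default_rng(1)
normal = rng.normal(size=(50, 6))
mu, V = fit_reconstructor(normal)

test = np.vstack([rng.normal(size=6), rng.normal(size=6) + 8.0])
scores = anomaly_scores(test, mu, V)
```

The model only learns to reconstruct what it was trained on, so the deviant second trace cannot be reconstructed well and receives a much higher score — flagging it as anomalous once a threshold is applied.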

Citations: 0