
Latest publications in Information Systems

An efficient approach for discovering Graph Entity Dependencies (GEDs)
IF 3.0 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-28 · DOI: 10.1016/j.is.2024.102421
Dehua Liu , Selasi Kwashie , Yidi Zhang , Guangtong Zhou , Michael Bewong , Xiaoying Wu , Xi Guo , Keqing He , Zaiwen Feng

Graph entity dependencies (GEDs) are novel graph constraints, unifying keys and functional dependencies, for property graphs. They have been found useful in many real-world data quality and data management tasks, including fact checking on social media networks and entity resolution. In this paper, we study the discovery problem of GEDs: finding a minimal cover of valid GEDs in a given graph dataset. We formalise the problem, and propose an effective and efficient approach to overcome major bottlenecks in GED discovery. In particular, we leverage existing graph partitioning algorithms to enable fast GED-scope discovery, and employ effective pruning strategies over the prohibitively large space of candidate dependencies. Furthermore, we define an interestingness measure for GEDs based on the minimum description length principle, to score and rank the mined cover set of GEDs. Finally, we demonstrate the scalability and effectiveness of our GED discovery approach through extensive experiments on real-world benchmark graph data sets; and present the usefulness of the discovered rules in different downstream data quality management applications.
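As a toy illustration of minimum-description-length scoring of mined rules (the formula, counts, and function names below are illustrative assumptions, not the paper's actual measure): a rule is worth keeping when the bits it saves on matched entities outweigh the cost of stating the rule plus encoding its exceptions.

```python
import math

def mdl_interestingness(num_matches, num_violations, rule_size, total_entities):
    """Toy MDL-style interestingness score for a mined dependency.

    Encoding the data with the rule is cheaper when many entities match
    it (bits saved) and few violate it (exceptions must still be
    encoded), minus the cost of stating the rule itself.
    """
    bits_per_entity = math.log2(max(total_entities, 2))
    model_cost = rule_size * bits_per_entity       # cost of the rule's atoms
    data_savings = num_matches * bits_per_entity   # entities explained by the rule
    exception_cost = num_violations * bits_per_entity
    return data_savings - exception_cost - model_cost

# A broad rule with few exceptions should outrank a narrow one.
broad = mdl_interestingness(num_matches=900, num_violations=5,
                            rule_size=3, total_entities=1000)
narrow = mdl_interestingness(num_matches=20, num_violations=5,
                             rule_size=3, total_entities=1000)
```

Scores of this shape can be used to rank a mined cover set, with vacuous rules (no matches) scoring negative.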

Information Systems, Volume 125, Article 102421.
Citations: 0
Analyzing workload trends for boosting triple stores performance
IF 3.0 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-10 · DOI: 10.1016/j.is.2024.102420
Ahmed Al-Ghezi, Lena Wiese

The Resource Description Framework (RDF) is widely used to model web data. The scale and complexity of the modeled data pose performance challenges for RDF triple stores. Workload adaptation is one important strategy for dealing with those challenges at the storage level. Current workload-adaptation approaches lack the necessary generalization of the problem and use the workload to optimize only part of the storage layer (mostly replication). This leaves a large performance gap in other data structures (e.g., indexes and caches) that could benefit heavily from the same workload-adaptation strategy. Moreover, in most current approaches the workload statistics are built collectively, so the analysis process is unaware of whether workload items are old or recent. This fails to capture the temporal trends that exist naturally in user queries, causing the analysis to lag behind rapid workload development. We present a novel universal adaptation approach to the storage management of a distributed RDF store. The system aims to find optimal data assignments to the different indexes, replications, and the join cache within the limited storage space. We present a cost model based on the workload, which often contains frequent patterns. The workload is analyzed dynamically and continuously to evaluate predefined rules, weighing the benefits and costs of all options for assigning data to the storage structures. The objective is to reduce query execution time by letting different data containers compete for the limited storage space. By modeling the workload statistics as time series, we can apply well-known smoothing techniques that let the importance of old workload observations decay over time, allowing the universal adaptation to stay in tune with changes in the workload trends.
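The decaying-importance idea can be sketched with plain exponential smoothing over per-window access counts (a generic time-series technique; the paper's actual statistics, windows, and parameters may differ).

```python
def smooth_workload(counts, alpha=0.3):
    """Exponentially weighted moving average over per-window access
    counts: older observations decay geometrically, so the score tracks
    recent workload trends rather than the all-time aggregate."""
    level = float(counts[0])
    smoothed = []
    for c in counts:
        level = alpha * c + (1 - alpha) * level
        smoothed.append(level)
    return smoothed

# A pattern that was hot long ago but has since gone cold loses
# importance, so the adaptation layer can reassign its storage budget:
history = [100, 100, 100, 0, 0, 0, 0, 0]
recent_importance = smooth_workload(history)[-1]   # well below the old peak
```

With this scoring, a recently hot pattern outranks an equally large but stale one, which is exactly what a collectively built (unsmoothed) count cannot express.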

Information Systems, Volume 125, Article 102420.
Citations: 0
Detecting the adversarially-learned injection attacks via knowledge graphs
IF 3.7 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-04 · DOI: 10.1016/j.is.2024.102419
Yaojun Hao , Haotian Wang , Qingshan Zhao , Liping Feng , Jian Wang

Over the past two decades, many studies have devoted a good deal of attention to detecting injection attacks in recommender systems. However, most of these studies focus on heuristically-generated injection attacks, which are fabricated by hand-engineering. In practice, adversarially-learned injection attacks built on optimization methods have emerged, with enhanced camouflage and threat capabilities; under such attacks, traditional detection models are likely to be fooled. In this paper, a detection method for adversarially-learned injection attacks is proposed via knowledge graphs. Firstly, exploiting the wealth of information in knowledge graphs, item-pairs on the extension hops of a knowledge graph are regarded as implicit preferences of users. Also, item-pair popularity series and a user item-pair matrix are constructed to express users' preferences. Secondly, a word embedding model and principal component analysis are utilized to extract users' initial vector representations from the item-pair popularity series and the item-pair matrix, respectively. Moreover, Variational Autoencoders with improved R-drop regularization are used to reconstruct the embedding vectors and thereby identify shilling profiles. Finally, experiments on three real-world datasets indicate that the proposed detector outperforms benchmark methods when detecting adversarially-learned injection attacks. In addition, the detector is evaluated under heuristically-generated injection attacks and demonstrates outstanding performance.
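The reconstruction-error criterion at the core of such detectors can be illustrated with a much simpler stand-in for the VAE: reconstruct every user profile from a shared mean profile and flag the worst-reconstructed ones. All names and the toy model here are assumptions for illustration, not the paper's architecture.

```python
def reconstruction_errors(profiles):
    """Squared reconstruction error of each profile against a shared
    mean profile -- a crude stand-in for a VAE's reconstruction loss."""
    n, dim = len(profiles), len(profiles[0])
    mean = [sum(p[j] for p in profiles) / n for j in range(dim)]
    return [sum((p[j] - mean[j]) ** 2 for j in range(dim)) for p in profiles]

def flag_suspects(profiles, k=1):
    """Return the indices of the k profiles the model reconstructs worst."""
    errs = reconstruction_errors(profiles)
    return sorted(range(len(errs)), key=errs.__getitem__, reverse=True)[:k]

# Three similar rating profiles and one that deviates from the rest:
profiles = [[1, 0, 1], [1, 0, 1], [1, 0, 1], [0, 1, 0]]
suspects = flag_suspects(profiles)
```

A trained autoencoder plays the same role as the mean profile here: profiles it cannot reconstruct well are flagged as shilling candidates.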

Information Systems, Volume 125, Article 102419.
Citations: 0
FDM: Effective and efficient incident detection on sparse trajectory data
IF 3.7 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-01 · DOI: 10.1016/j.is.2024.102418
Xiaolin Han , Tobias Grubenmann , Chenhao Ma , Xiaodong Li , Wenya Sun , Sze Chun Wong , Xuequn Shang , Reynold Cheng

Incident detection (ID), or the automatic discovery of anomalies from road traffic data (e.g., road sensor and GPS data), enables emergency actions (e.g., rescuing injured people) to be carried out in a timely fashion. Existing ID solutions based on data mining or machine learning often rely on dense traffic data; for instance, sensors installed in highways provide frequent updates of road information. In this paper, we ask the question: can ID be performed on sparse traffic data (e.g., location data obtained from GPS devices equipped on vehicles)? As these data may not be enough to describe the state of the roads involved, they can undermine the effectiveness of existing ID solutions. To tackle this challenge, we borrow an important insight from the transportation area, which uses trajectories (i.e., moving histories of vehicles) to derive incident patterns. We study how to obtain incident patterns from trajectories and devise a new solution (called Filter-Discovery-Match (FDM)) to detect anomalies in sparse traffic data. We have also developed a fast algorithm to support FDM. Experiments on a taxi dataset in Hong Kong and a simulated dataset show that FDM is more effective than state-of-the-art ID solutions on sparse traffic data, and is also efficient.
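A minimal sketch of deriving incidents from sparse trajectory samples (the thresholds, names, and travel-time heuristic are assumptions; the paper's FDM incident patterns are richer than this).

```python
from collections import defaultdict

def detect_incidents(observations, baseline, factor=2.0, min_samples=3):
    """Flag road segments whose recent mean travel time, estimated from
    sparse per-vehicle GPS samples, exceeds `factor` times a historical
    baseline. Segments with too few samples are left undecided.

    observations: list of (segment_id, travel_time) pairs from trajectories.
    baseline: dict mapping segment_id -> normal travel time.
    """
    samples = defaultdict(list)
    for seg, t in observations:
        samples[seg].append(t)
    return [seg for seg, ts in samples.items()
            if len(ts) >= min_samples and sum(ts) / len(ts) > factor * baseline[seg]]

baseline = {"a": 30.0, "b": 30.0}
obs = [("a", 30), ("a", 32), ("a", 31), ("b", 90), ("b", 95), ("b", 100)]
```

The `min_samples` guard reflects the core difficulty of sparse data: with too few trajectories over a segment, no reliable incident decision can be made.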

Information Systems, Volume 125, Article 102418.
Citations: 0
Enhancing Entity Resolution with a hybrid Active Machine Learning framework: Strategies for optimal learning in sparse datasets
IF 3.7 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-05-25 · DOI: 10.1016/j.is.2024.102410
Mourad Jabrane , Hiba Tabbaa , Aissam Hadri , Imad Hafidi

When solving the problem of identifying similar records in different datasets (known as Entity Resolution, or ER), one big challenge is the lack of labeled data, which is crucial for building strong machine learning models but expensive and time-consuming to obtain. Active Machine Learning (ActiveML) is a helpful approach because it cleverly picks the most useful pieces of data to learn from, guided by two main ideas: informativeness and representativeness. Typical ActiveML methods used in ER usually depend too much on just one of these ideas, which can make them less effective, especially when starting with very little data. Our research introduces a new combined method that uses both ideas together. We created two versions of this method, called DPQ and STQ, and tested them on eleven different real-world datasets. The results show that our new method improves ER, producing better scores, more stable models, and faster learning with less training data compared to existing methods.
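The two ideas can be combined into a single query-selection score. The geometric weighting below is a generic hybrid-sampling sketch (`beta`, the score form, and all names are assumptions, not the DPQ/STQ strategies themselves).

```python
def hybrid_scores(probs, densities, beta=0.5):
    """Score candidate record pairs for labeling by combining
    informativeness (model uncertainty, peaking at a predicted match
    probability of 0.5) with representativeness (density in the pool)."""
    scores = []
    for p, d in zip(probs, densities):
        uncertainty = 1.0 - 2.0 * abs(p - 0.5)   # 1 at p=0.5, 0 at p in {0, 1}
        scores.append((uncertainty ** beta) * (d ** (1.0 - beta)))
    return scores

def select_next(probs, densities, beta=0.5):
    """Index of the candidate pair to send to the human annotator next."""
    scores = hybrid_scores(probs, densities, beta)
    return max(range(len(scores)), key=scores.__getitem__)
```

Because the score is a product, a pair must be both uncertain and representative to win; `beta` shifts the balance between the two criteria.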

Information Systems, Volume 125, Article 102410.
Citations: 0
HUM-CARD: A human crowded annotated real dataset
IF 3.7 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-05-21 · DOI: 10.1016/j.is.2024.102409
Giovanni Di Gennaro , Claudia Greco , Amedeo Buonanno , Marialucia Cuciniello , Terry Amorese , Maria Santina Ler , Gennaro Cordasco , Francesco A.N. Palmieri , Anna Esposito

The growth of data-driven approaches typical of Machine Learning leads to an ever-increasing need for large quantities of labeled data. Unfortunately, these annotations are often made automatically and/or crudely, thus destroying the very concept of “ground truth” they are supposed to represent. To address this problem, we introduce HUM-CARD, a dataset of human trajectories in crowded contexts manually annotated by nine experts in engineering and psychology, totaling approximately 5000 hours. Our multidisciplinary labeling process has enabled the creation of a well-structured ontology, accounting for both individual and contextual factors influencing human movement dynamics in shared environments. Preliminary and descriptive analyses are presented, highlighting the potential benefits of this dataset and its methodology for various research challenges.

Information Systems, Volume 124, Article 102409.
Citations: 0
Heart failure prognosis prediction: Let’s start with the MDL-HFP model
IF 3.7 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-05-21 · DOI: 10.1016/j.is.2024.102408
Huiting Ma , Dengao Li , Jian Fu , Guiji Zhao , Jumin Zhao

Heart failure, as a critical symptom or terminal stage of assorted heart diseases, is a world-class public health problem. Establishing a prognostic model can help identify high-risk patients, save their lives promptly, and reduce the medical burden. Although integrating structured indicators and unstructured text for complementary information has proven effective in disease prediction tasks, certain limitations remain. Firstly, the processing of each single-branch modality is easily overlooked, which can affect the final fusion result. Secondly, simple fusion loses complementary information between modalities, limiting the network’s learning ability. Thirdly, incomplete interpretability can hinder the practical application and development of a model. To overcome these challenges, this paper proposes the MDL-HFP multimodal model for predicting patient prognosis using the MIMIC-III public database. Firstly, the ADASYN algorithm is used to handle the imbalance of data categories. Then, the proposed improved Deep&Cross Network is used for automatic feature selection to encode structured sparse features, and implicit graph structure information is introduced to encode unstructured clinical notes based on the HR-BGCN model. Finally, the information of the two modalities is fused through a cross-modal dynamic interaction layer. Comparison with multiple advanced multimodal deep learning models verifies the model’s effectiveness, with an average F1 score of 90.42% and an average accuracy of 90.70%. The proposed model can accurately classify patients’ readmission status, thereby assisting doctors in making judgments and improving patients’ prognosis. Further visual analysis demonstrates the usability of the model, providing a comprehensive explanation for clinical decision-making.
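ADASYN, used above for class imbalance, allocates more synthetic samples to minority points that are harder to learn, i.e. those with more majority-class neighbors among their k nearest neighbors. A sketch of that allocation step (the neighbor counts are illustrative inputs; a full implementation would also compute the k-NN and interpolate new samples):

```python
def adasyn_allocation(majority_neighbor_counts, k, total_new):
    """How many synthetic samples each minority point receives under
    ADASYN's allocation rule: points whose k-NN neighborhood contains
    more majority-class samples (harder to learn) get more synthetics.

    majority_neighbor_counts[i]: number of majority-class samples among
    the k nearest neighbors of minority point i.
    """
    ratios = [c / k for c in majority_neighbor_counts]
    norm = sum(ratios) or 1.0          # avoid division by zero
    return [round(total_new * r / norm) for r in ratios]

# A point deep in majority territory gets most of the synthetic budget:
allocation = adasyn_allocation([4, 1, 0], k=5, total_new=10)
```

This adaptive allocation is what distinguishes ADASYN from uniform oversampling such as plain SMOTE.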

Information Systems, Volume 125, Article 102408.
Citations: 0
GAMA: A multi-graph-based anomaly detection framework for business processes via graph neural networks
IF 3.7 · CAS Tier 2 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-05-19 · DOI: 10.1016/j.is.2024.102405
Wei Guan, Jian Cao, Yang Gu, Shiyou Qian

Anomalies in business processes are inevitable for various reasons such as system failures and operator errors. Detecting anomalies is important for the management and optimization of business processes. However, prevailing anomaly detection approaches often fail to capture crucial structural information about the underlying process. To address this, we propose a multi-Graph based Anomaly detection fraMework for business processes via grAph neural networks, named GAMA. GAMA makes use of structural process information and attribute information in a more integrated way. In GAMA, multiple graphs are applied to model a trace in which each attribute is modeled as a separate graph. In particular, the graph constructed for the special attribute activity reflects the control flow. Then GAMA employs a multi-graph encoder and a multi-sequence decoder on multiple graphs to detect anomalies in terms of the reconstruction errors. Moreover, three teacher forcing styles are designed to enhance GAMA’s ability to reconstruct normal behaviors and thus improve detection performance. We conduct extensive experiments on both synthetic logs and real-life logs. The experiment results demonstrate that GAMA outperforms state-of-the-art methods for both trace-level and attribute-level anomaly detection.
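The one-graph-per-attribute encoding of a trace can be illustrated directly (a minimal sketch; the attribute names and the consecutive-event edge construction are simplified assumptions about the modeling).

```python
def trace_to_attribute_graphs(trace, attributes):
    """Build one directed edge set per attribute from a single trace:
    each pair of consecutive events contributes an edge between the
    values the attribute takes, so the 'activity' graph reproduces the
    control flow of the trace."""
    graphs = {a: set() for a in attributes}
    for prev, curr in zip(trace, trace[1:]):
        for a in attributes:
            graphs[a].add((prev[a], curr[a]))
    return graphs

# A three-event trace with two attributes yields two separate graphs:
trace = [
    {"activity": "register", "user": "alice"},
    {"activity": "check",    "user": "bob"},
    {"activity": "approve",  "user": "alice"},
]
graphs = trace_to_attribute_graphs(trace, ["activity", "user"])
```

Each per-attribute graph can then be fed to its own graph encoder, and anomalies scored by how poorly a decoder reconstructs the original attribute sequences.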

Information Systems, vol. 124, Article 102405; published 2024-05-19. Citations: 0
TRGST: An enhanced generalized suffix tree for topological relations between paths
IF 3.7 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-05-18 DOI: 10.1016/j.is.2024.102406
Carlos Quijada-Fuentes , M. Andrea Rodríguez , Diego Seco

This paper introduces the TRGST data structure, which is designed to handle queries related to topological relations between paths represented as sequences of stops in a network. As an example, these paths could correspond to stops on a public transport network, and a query of interest is to retrieve paths that share at least k consecutive stops. While topological relations among spatial objects have received extensive attention, the efficient processing of these relations in the context of trajectory paths, considering both time and space efficiency, remains a relatively less explored domain. Taking inspiration from pattern matching implementations, the TRGST data structure is constructed on the foundation of the Generalized Suffix Tree. Its purpose is to provide a compact representation of a set of paths and to efficiently handle topological relation queries by leveraging the pattern search capabilities inherent in this structure. The paper provides a detailed account of the structure and algorithms of TRGST, followed by a performance analysis utilizing both real and synthetic data. The results underscore the remarkable scalability of the TRGST in terms of both query time and space utilization.
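TRGST answers such queries with a compact generalized suffix tree. As a naive baseline that only illustrates the query semantics — pairs of paths sharing a window of at least k consecutive stops — one can index every k-gram of every path; the index size and pair enumeration of this approach are exactly what a suffix-tree-based structure is designed to avoid. The sketch below is that baseline, not the paper's data structure:

```python
from collections import defaultdict

def build_kgram_index(paths, k):
    """Map each window of k consecutive stops to the ids of paths containing it."""
    index = defaultdict(set)
    for pid, path in enumerate(paths):
        for i in range(len(path) - k + 1):
            index[tuple(path[i:i + k])].add(pid)
    return index

def paths_sharing_k_stops(paths, k):
    """Return pairs (i, j), i < j, of paths sharing at least k consecutive stops."""
    index = build_kgram_index(paths, k)
    pairs = set()
    for pids in index.values():
        ordered = sorted(pids)
        for i in range(len(ordered)):
            for j in range(i + 1, len(ordered)):
                pairs.add((ordered[i], ordered[j]))
    return pairs
```

Any k-gram shared by two paths witnesses the relation, so one index lookup per window suffices; the worst case is still quadratic in the number of paths, which motivates the pattern-search machinery of the suffix-tree approach.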

Information Systems, vol. 125, Article 102406; published 2024-05-18. Citations: 0
MBDL: Exploring dynamic dependency among various types of behaviors for recommendation
IF 3.7 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-05-18 DOI: 10.1016/j.is.2024.102407
Hang Zhang, Mingxin Gan

Users have various behaviors on items, including page view, tag-as-favorite, add-to-cart, and purchase in online shopping platforms. These various types of behaviors reflect users’ different intentions, which also help learn their preferences on items in a recommender system. Although some multi-behavior recommendation methods have been proposed, two significant challenges have not been widely noticed: (i) capturing heterogeneous and dynamic preferences of users simultaneously from different types of behaviors; (ii) modeling the dynamic dependency among various types of behaviors. To overcome the above challenges, we propose a novel multi-behavior dynamic dependency learning method (MBDL) to explore the heterogeneity and dependency among various types of behavior sequences for recommendation. In brief, MBDL first uses a dual-channel interest encoder to learn the long-term interest representations and the evolution of short-term interests from the behavior-aware item sequences. Then, MBDL adopts a contrastive learning method to preserve the consistency of user’s long-term behavioral patterns, and a multi-head attention network to capture the dynamic dependency among short-term interactive behaviors. Finally, MBDL adaptively integrates the influence of long- and short-term interests to predict future user–item interactions. Experiments on two real-world datasets show that the proposed MBDL method outperforms state-of-the-art methods significantly on recommendation accuracy. Further ablation studies demonstrate the effectiveness of our model and the benefits of learning dynamic dependency among types of behaviors.
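As a minimal sketch of the long-/short-term blending idea (not MBDL's dual-channel encoder or attention network): weight each interaction by its behavior type — e.g. purchase heavier than page view, an assumed mapping — then combine a global frequency profile with an exponentially decayed recency profile. The decay rate and mixing weight below are illustrative parameters:

```python
from collections import Counter

def interest_scores(history, decay=0.8, alpha=0.5):
    """Blend a long-term frequency profile with a recency-weighted short-term one.

    history: list of (item, behavior_weight) interactions, oldest first.
    alpha:   share of the long-term profile in the final score.
    """
    long_term = Counter()
    short_term = Counter()
    n = len(history)
    for i, (item, w) in enumerate(history):
        long_term[item] += w
        short_term[item] += w * (decay ** (n - 1 - i))  # recent events weigh more
    total_long = sum(long_term.values()) or 1.0
    total_short = sum(short_term.values()) or 1.0
    return {item: alpha * long_term[item] / total_long
                  + (1 - alpha) * short_term[item] / total_short
            for item in long_term}
```

With alpha near 1 the ranking follows historical frequency; near 0 it follows the most recent interactions — the two signals MBDL learns jointly and integrates adaptively rather than with a fixed mixing weight.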

Information Systems, vol. 124, Article 102407; published 2024-05-18. Citations: 0