
Data & Knowledge Engineering: Latest Publications

Generating psychological analysis tables for children's drawings using deep learning
IF 2.5 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-06 | DOI: 10.1016/j.datak.2023.102266
Moonyoung Lee , Youngho Kim , Young-Kuk Kim

The usefulness of drawing-based psychological testing has been demonstrated in a variety of studies. Because drawing is a familiar medium, drawing-based tests can be applied to a wide range of age groups and are particularly effective with children who have difficulty expressing themselves verbally. However, drawing tests are usually administered face-to-face, require specialized counseling staff, and can be time-consuming and expensive to apply to large numbers of children. These problems could be addressed by applying modern artificial intelligence (AI) techniques: if AI can analyze children's drawings and perform psychological analysis, testing can be offered as a service and taken online or through smartphones. There have been various attempts to automate drawing-based psychological tests by using deep learning to process images, but previous studies based on classification have been limited in their ability to extract structural information. In this paper, we analyze the House-Tree-Person Test (HTP), one of the drawing-based psychological tests widely used in clinical practice, using object detection technology that can extract more diverse information from images. In addition, we extend existing research, which has been limited to extracting relatively simple psychological features, and generate a psychological analysis table from the extracted features that can assist experts during psychological testing. Our findings indicate that object detection achieves a mean Average Precision (mAP) of approximately 92.6%–94.1%, and the average accuracy of the psychological analysis table is 94.4%.
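To make the detection-to-table step concrete, it can be sketched as below. This is a minimal illustration under assumed labels, a confidence threshold, and invented interpretation rules ("door drawn", "house area ratio"); the paper's actual clinical feature mapping is not given in the abstract.

```python
# Detections: (label, confidence, bounding box as x, y, width, height).
detections = [
    ("house", 0.97, (10, 40, 120, 90)),
    ("door", 0.91, (50, 90, 20, 35)),
    ("tree", 0.95, (160, 30, 60, 110)),
]

CANVAS_W, CANVAS_H = 256, 256  # assumed drawing size

def analysis_table(dets, min_conf=0.5):
    """Map detections to (feature, value) rows of an analysis table."""
    kept = {label: box for label, conf, box in dets if conf >= min_conf}
    rows = [("door drawn", "door" in kept)]  # presence/absence feature
    if "house" in kept:
        x, y, w, h = kept["house"]
        # Relative-size feature: house area as a fraction of the canvas.
        rows.append(("house area ratio", round(w * h / (CANVAS_W * CANVAS_H), 3)))
    return rows

table = analysis_table(detections)
```

Rows like these would then be reviewed by an expert rather than used as a standalone diagnosis.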

Citations: 0
Blockchain-based ontology driven reference framework for security risk management
IF 2.5 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-04 | DOI: 10.1016/j.datak.2023.102257
Mubashar Iqbal , Aleksandr Kormiltsyn , Vimal Dwivedi , Raimundas Matulevičius

Security risk management (SRM) is crucial for protecting valuable assets from malicious harm. While blockchain technology has been proposed to mitigate security threats in traditional applications, it is not a perfect solution, and its own security threats must be managed. This paper addresses the absence of unified, formal knowledge models supporting both the SRM of traditional applications that use blockchain and the SRM of blockchain-based applications. Accordingly, we present a blockchain-based reference model (BbRM) and an ontology-driven reference framework (OntReF) for the SRM of traditional and blockchain-based applications. The BbRM consolidates the security threats of traditional and blockchain-based applications, is structured following the SRM domain model, and offers guidance for creating the OntReF from that domain model. OntReF is grounded in the Unified Foundational Ontology (UFO), provides semantic interoperability, and supports dynamic knowledge representation and instantiation of information security knowledge for SRM. Our evaluation shows that OntReF is practical to use.
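The SRM domain-model vocabulary that BbRM consolidates (assets, threats, countermeasures) can be illustrated with a minimal sketch. The class and field names below are our assumptions for illustration only; the actual OntReF is a UFO-grounded ontology, not Python classes, and the example threats are generic.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    targets: str                     # the asset the threat harms
    countermeasures: list = field(default_factory=list)

@dataclass
class ReferenceModel:
    threats: list = field(default_factory=list)

    def mitigations_for(self, asset: str) -> list:
        """Collect the countermeasures of every threat against an asset."""
        return [c for t in self.threats if t.targets == asset
                for c in t.countermeasures]

# Threats of blockchain-based and traditional applications side by side,
# in the spirit of BbRM's consolidation.
bbrm = ReferenceModel(threats=[
    Threat("51% attack", "consensus", ["increase validator decentralization"]),
    Threat("SQL injection", "database", ["parameterized queries"]),
])
mitigations = bbrm.mitigations_for("database")
```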

Citations: 0
Integrated detection and localization of concept drifts in process mining with batch and stream trace clustering support
IF 2.5 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-02 | DOI: 10.1016/j.datak.2023.102253
Rafael Gaspar de Sousa , Antonio Carlos Meira Neto , Marcelo Fantinato , Sarajane Marques Peres , Hajo Alexander Reijers

Process mining can help organizations by extracting knowledge from event logs. However, process mining techniques often assume business processes are stationary, while actual business processes are constantly subject to change because of the complexity of organizations and their external environment. Addressing process changes over time – known as concept drifts – therefore allows a better understanding of process behavior and can provide a competitive edge for organizations, especially in an online data-stream scenario. Current approaches to handling process concept drift focus primarily on detecting and locating concept drifts, often through an integrated, albeit offline, approach. However, some of these integrated approaches rely on complex data structures tied to tree-based process models, usually discovered by algorithms whose results are influenced by specific heuristic rules. Moreover, most of the proposed approaches have not been tested on the public, true-concept-drift-labeled event logs commonly used as benchmarks, making comparative analysis difficult. In this article, we propose an online approach that detects and localizes concept drifts in an integrated way using batch and stream trace clustering support. In our approach, cluster models provide the input for both drift detection and localization; each cluster abstracts a behavior profile underlying the process and reveals descriptive information about the discovered drifts. Experiments with benchmark synthetic event logs containing different control-flow changes, as well as with real-world event logs, showed that our approach, when relying on the same clustering model, is competitive with baseline concept drift detection methods. In addition, the experiments showed that our approach correctly locates the detected drifts and enables their analysis through different process behavior profiles.
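A drastically simplified sketch of the idea that cluster models feed drift detection: compare the mix of trace-cluster labels in adjacent windows of the stream and flag abrupt changes. The window size, threshold, and total-variation measure are assumptions for the example, not the article's algorithm.

```python
from collections import Counter

def label_dist(window):
    """Relative frequency of each cluster label in a window."""
    counts = Counter(window)
    total = len(window)
    return {k: v / total for k, v in counts.items()}

def drift_points(cluster_stream, win=4, threshold=0.5):
    """Return stream indices where the cluster mix changes abruptly."""
    points = []
    for i in range(win, len(cluster_stream) - win + 1):
        ref = label_dist(cluster_stream[i - win:i])   # window before i
        cur = label_dist(cluster_stream[i:i + win])   # window from i
        keys = set(ref) | set(cur)
        # Total-variation distance between the two label distributions.
        tv = 0.5 * sum(abs(ref.get(k, 0) - cur.get(k, 0)) for k in keys)
        if tv >= threshold:
            points.append(i)
    return points

# Cluster A dominates, then the process shifts to cluster B at index 4.
stream = ["A", "A", "A", "A", "B", "B", "B", "B"]
drifts = drift_points(stream)
```

Because each cluster stands for a behavior profile, the labels on either side of a flagged index also say *which* behaviors changed, which is the localization aspect.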

Citations: 0
Editorial for VSI:NLDB-saarbruecken-2021
IF 2.5 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-30 | DOI: 10.1016/j.datak.2023.102259
Helmut Horacek , Epaminondas Kapetanios , Elisabeth Metais , Farid Meziane
Citations: 0
DeepScraper: A complete and efficient tweet scraping method using authenticated multiprocessing
IF 2.5 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-30 | DOI: 10.1016/j.datak.2023.102260
Jaebeom You , Kisung Lee , Hyuk-Yoon Kwon

In this paper, we propose a scraping method for collecting tweets, which we call DeepScraper. DeepScraper completely scrapes all tweets written by a given group of users, or all tweets containing given search keywords, at high speed. To improve crawling speed, we devise a multiprocessing architecture that authenticates each of the multiple processes by simulating user access behavior on Twitter. This allows us to maximize crawling parallelism even on a single machine. Through extensive experiments, we show that DeepScraper can crawl the entire tweet history of 99 users, amounting to 5,798,052 tweets, while the Twitter standard API can crawl only 243,650 of them due to its constraints on the number of tweets scraped; in other words, DeepScraper collected 23.7 times more tweets for those 99 users than the standard API. We also demonstrate DeepScraper's efficiency. First, authenticated multiprocessing increases crawling speed by 2.03 to 10.57 times as the number of running processes grows from 2 to 32, compared to DeepScraper with a single process. Second, comparing DeepScraper's crawling speed with existing studies shows that it is comparable to the Twitter standard API and Twitter4J while scraping far more tweets than either. Furthermore, DeepScraper is roughly 3.69 times faster than Twitter Scrapy, while both can scrape the entire tweet history for the target users or keywords.

Citations: 0
S_IDS: An efficient skyline query algorithm over incomplete data streams
IF 2.5 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-30 | DOI: 10.1016/j.datak.2023.102258
Mei Bai, Yuxue Han, Peng Yin, Xite Wang, Guanyu Li, Bo Ning, Qian Ma

The efficient processing of massive stream data has attracted wide attention in the database field. A skyline query on a sensor data stream can monitor multiple targets in real time to avoid abnormal events such as fires and explosions, which is very useful in practical sensor-data monitoring. However, real-world stream data often contains incomplete attributes due to faulty sensing devices or imperfect data collection techniques, and skyline queries over incomplete data streams suffer from loss of transitivity and cyclic-domination issues. To solve this problem, this paper first uses differential dependency (DD) rules to fill the missing attribute values in the incomplete data stream. Then, a hierarchical grid index (HGrid) is introduced into skyline query processing to improve pruning efficiency. During skyline computation, we keep as few intermediate results as possible for data that may affect the outcome, avoiding a large number of repeated calculations. On this basis, S_IDS (Skyline query algorithm over Incomplete Data Streams) is proposed to retrieve skyline results with high confidence from an incomplete data stream. Finally, comparison with state-of-the-art skyline query algorithms over incomplete data streams demonstrates the correctness and efficiency of the proposed S_IDS algorithm.
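The transitivity and cyclic-domination problems stem from the dominance test typically used over incomplete data, which compares two tuples only on the dimensions both have. A minimal sketch (smaller is better; `None` marks a missing value; the paper's DD-based filling and HGrid index are not shown):

```python
def dominates(p, q):
    """True if p dominates q on their shared non-missing dimensions."""
    strictly_better = False
    shared = False
    for a, b in zip(p, q):
        if a is None or b is None:
            continue  # missing on either side: dimension is ignored
        shared = True
        if a > b:
            return False  # worse somewhere: no domination
        if a < b:
            strictly_better = True
    return shared and strictly_better

def skyline(points):
    """Points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1, None, 3), (2, 2, None), (None, 1, 4)]
sky = skyline(pts)
```

Because each comparison uses a different subset of dimensions, `dominates` is not transitive, which is exactly why naive skyline maintenance breaks down on incomplete streams.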

Citations: 0
Explainable influenza forecasting scheme using DCC-based feature selection
IF 2.5 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-26 | DOI: 10.1016/j.datak.2023.102256
Sungwoo Park , Jaeuk Moon , Seungwon Jung , Seungmin Rho , Eenjun Hwang

Because influenza easily mutates into other virus strains and spreads very quickly from person to person, it is likely to develop into a pandemic. Even though vaccines are the most effective way to prevent influenza, producing them takes a long time, which has caused an imbalance between influenza vaccine supply and demand every year. For a smooth vaccine supply, vaccine demand must be forecast accurately at least three to six months in advance. So far, many machine learning-based predictive models have shown excellent performance, but their use has been limited by performance deterioration from inappropriate training data and by their inability to explain their results. To solve these problems, we propose an explainable influenza forecasting model. In particular, the model selects highly related data based on the distance correlation coefficient for effective training, and it explains its predictions using Shapley additive explanations (SHAP). We evaluated its performance through extensive experiments and report some of the results.
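The distance-correlation screening step can be sketched as follows. The 0.5 threshold and toy data are illustrative assumptions, not the paper's configuration; the formula is the standard biased (V-statistic) estimator, which is always non-negative, so the square roots are safe.

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical (biased) distance correlation between two 1-D samples."""
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[:, None]
    a = np.abs(x - x.T)                      # pairwise distance matrices
    b = np.abs(y - y.T)
    # Double-center both distance matrices.
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                   # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return 0.0 if denom == 0 else float(np.sqrt(dcov2 / denom))

def select_features(X, y, threshold=0.5):
    """Indices of columns whose distance correlation with y clears threshold."""
    return [j for j in range(X.shape[1])
            if distance_correlation(X[:, j], y) >= threshold]

rng = np.random.default_rng(0)
y = np.linspace(0.0, 1.0, 50)
informative = 2.0 * y + 0.01 * rng.normal(size=50)   # strongly related to y
noise = rng.normal(size=50)                          # unrelated to y
X = np.column_stack([informative, noise])
selected = select_features(X, y)
```

Unlike Pearson correlation, distance correlation is zero (in the population) only under independence, which is why it suits screening features with nonlinear relationships to the target.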

Citations: 0
A-MKMC: An effective adaptive-based multilevel K-means clustering with optimal centroid selection using hybrid heuristic approach for handling the incomplete data
IF 2.5 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-22 | DOI: 10.1016/j.datak.2023.102243
Hima Vijayan , Subramaniam M , Sathiyasekar K

In general, clustering partitions similar and dissimilar objects into several groups. It has been widely used in applications such as pattern recognition, image processing, and data analysis. When a dataset contains missing values, it is termed incomplete data; such a dataset cannot be validated properly, and these flaws degrade data quality. Missing values are therefore handled by adapting the clustering mechanism to sort out the missing data. However, traditional clustering algorithms struggle with these issues, as they are not designed to handle high-dimensional data, and they are also affected by human error and inaccurate outcomes. To alleviate the challenge of incomplete data, a novel clustering algorithm is proposed. Initially, incomplete or mixed data is gathered from five standard data sources. Once collected, the data undergoes a pre-processing phase based on data normalization. The final step is the new clustering algorithm, termed Adaptive centroid based Multilevel K-Means Clustering (A-MKMC), in which the cluster centroid is optimized by integrating two conventional algorithms, Border Collie Optimization (BCO) and the Whale Optimization Algorithm (WOA), into a hybrid named Hybrid Border Collie Whale Optimization (HBCWO). The novel clustering model is validated with various measures and compared against traditional mechanisms. In the overall result analysis, the designed HBCWO-A-MKMC method attains 93% accuracy and 95% precision. Hence, the adaptive clustering process achieves the higher performance that aids in sorting out the missing-data issue compared to other conventional methods.
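The overall pipeline shape (handle missing values, then cluster) can be sketched as below. Mean imputation and ordinary Lloyd's k-means stand in for the paper's DD-free pre-processing and HBCWO-optimized centroid search, and the toy data is an assumption.

```python
import numpy as np

def mean_impute(X):
    """Replace NaNs with the per-column mean."""
    X = np.array(X, dtype=float)
    col_mean = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_mean[cols]
    return X

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means; returns one cluster label per row."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Squared distance of every point to every centroid.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

data = [[0.0, 0.0], [0.2, 0.1], [0.1, np.nan],
        [5.0, 5.0], [5.2, 4.9], [np.nan, 5.1]]
labels = kmeans(mean_impute(data), k=2)
```

A metaheuristic like HBCWO would replace the random centroid initialization and the plain update step with a search over candidate centroids scored by a clustering objective.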

Hima Vijayan, Subramaniam M, Sathiyasekar K, "A-MKMC: An effective adaptive-based multilevel K-means clustering with optimal centroid selection using hybrid heuristic approach for handling the incomplete data", Data & Knowledge Engineering, vol. 150, Article 102243 (2023). DOI: 10.1016/j.datak.2023.102243
Citations: 0
Global and item-by-item reasoning fusion-based multi-hop KGQA
IF 2.5 | CAS Zone 3 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-20 | DOI: 10.1016/j.datak.2023.102244
Tongzhao Xu, Turdi Tohti, Askar Hamdulla

Existing embedded multi-hop Question Answering over Knowledge Graph (KGQA) methods attempt to handle Knowledge Graph (KG) sparsity using Knowledge Graph Embedding (KGE) to improve KGQA. However, they largely ignore the intermediate path reasoning process of answer prediction, do not consider the information interaction between the question and the KG, and rarely address the inadequacy of the triple-scoring reasoning mechanism in extracting deep features. To address these issues, this paper proposes Global and Item-by-item Reasoning Fusion-based Multi-hop KGQA (GIRFM-KGQA). In global reasoning, a convolutional attention reasoning mechanism is proposed and fused with the triple-scoring reasoning mechanism to jointly implement global reasoning, thus enhancing the long-chain reasoning ability of the global reasoning model. In item-by-item reasoning, the reasoning path is formed by serially predicting relations, and then the answer is predicted, which effectively compensates for the embedded multi-hop KGQA methods' lack of intermediate path reasoning ability. In addition, we introduce an information interaction method between the question and the KG to improve the accuracy of the answer prediction. Finally, we merge the global reasoning score with the item-by-item reasoning score to jointly predict the answer. Our model, compared to the baseline model (EmbedKGQA), achieves an accuracy improvement of 0.5% and 2.7% on two-hop questions and 6.2% and 4.6% on three-hop questions for the MetaQA_Full and MetaQA_Half datasets, and 1.7% on the WebQuestionSP dataset, respectively. The experimental results show that the proposed model can effectively improve the accuracy of the multi-hop KGQA model and enhance the interpretability of the model. We have made our model's source code available at github: https://github.com/feixiongfeixiong/GIRFM.
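The final fusion step, merging a global reasoning score with an item-by-item reasoning score before picking the answer, can be sketched as a weighted sum of normalized candidate scores. This is a hedged illustration of score fusion in general, not the paper's actual fusion function; the candidate entities, scores, and `alpha` weight below are invented for the example.

```python
import math

def softmax(scores):
    """Normalize a dict of raw scores into a probability distribution."""
    m = max(scores.values())
    exps = {e: math.exp(s - m) for e, s in scores.items()}
    z = sum(exps.values())
    return {e: v / z for e, v in exps.items()}

def fuse_answers(global_scores, path_scores, alpha=0.5):
    """Merge a global (triple-scoring) candidate ranking with an item-by-item
    (path-reasoning) ranking by a weighted sum of normalized scores."""
    g = softmax(global_scores)
    p = softmax(path_scores)
    fused = {e: alpha * g[e] + (1 - alpha) * p[e] for e in g}
    return max(fused, key=fused.get)

# Hypothetical candidate answer entities for a multi-hop question: the global
# module slightly prefers "A", but the path module is confident in "B",
# so the fused prediction is "B".
global_scores = {"A": 2.0, "B": 1.8, "C": 0.1}
path_scores = {"A": 0.5, "B": 3.0, "C": 0.2}
answer = fuse_answers(global_scores, path_scores)
print(answer)
```

The point of fusing the two signals is that either module alone can be misled: the global score lacks intermediate path evidence, while the path score can drift over long relation chains.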

Tongzhao Xu, Turdi Tohti, Askar Hamdulla, "Global and item-by-item reasoning fusion-based multi-hop KGQA", Data & Knowledge Engineering, vol. 149, Article 102244 (2023). DOI: 10.1016/j.datak.2023.102244
Citations: 0
The power and potentials of Flexible Query Answering Systems: A critical and comprehensive analysis
IF 2.5 | CAS Zone 3 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-19 | DOI: 10.1016/j.datak.2023.102246
Troels Andreasen , Gloria Bordogna , Guy De Tré , Janusz Kacprzyk , Henrik Legind Larsen , Sławomir Zadrożny

The popularity of chatbots, such as ChatGPT, has brought research attention to question answering systems capable of generating natural language answers to users' natural language queries. However, in other kinds of systems too, flexibility of querying, including but also going beyond the use of natural language, is an important feature. With this consideration in mind, the paper presents a critical and comprehensive analysis of recent developments, trends and challenges of Flexible Query Answering Systems (FQASs). Flexible query answering is a multidisciplinary research field that is not limited to question answering in natural language but comprises other query forms and interaction modalities, which aim to provide powerful means and techniques for better reflecting human preferences and intentions when retrieving relevant information. It adopts methods at the crossroads of several disciplines, among which Information Retrieval (IR), databases, knowledge-based systems, knowledge and data engineering, Natural Language Processing (NLP) and the semantic web may be mentioned. The analysis principles are inspired by the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) framework, characterized by a top-down process that starts with relevant keywords for the topic of interest to retrieve relevant articles from meta-sources, complemented with other relevant articles from seed sources identified by a bottom-up process. To mine the retrieved publication data, a network analysis is performed, which allows the intrinsic topics of the selected publications to be presented in a synthetic way. The issues dealt with relate to query answering methods, both model-based and data-driven (the latter based on either machine learning or deep learning), and to their needs for explainability and fairness in dealing with big data, notably by taking data veracity into account. The conclusions point out trends and challenges to help better shape the future of the FQAS field.

Troels Andreasen, Gloria Bordogna, Guy De Tré, Janusz Kacprzyk, Henrik Legind Larsen, Sławomir Zadrożny, "The power and potentials of Flexible Query Answering Systems: A critical and comprehensive analysis", Data & Knowledge Engineering, vol. 149, Article 102246 (2023). DOI: 10.1016/j.datak.2023.102246 (open access)
Citations: 0