
Latest articles from Forensic Science International: Digital Investigation

If at first you don't succeed, trie, trie again: Correcting TLSH scalability claims for large-dataset malware forensics
IF 2.2 | CAS Tier 4 (Medicine) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-01 | Epub Date: 2025-08-01 | DOI: 10.1016/j.fsidi.2025.301922
Jordi Gonzalez
Malware analysts use Trend Micro Locality-Sensitive Hashing (TLSH) for malware similarity computation, nearest-neighbor search, and related tasks like clustering and family classification. Although TLSH scales better than many alternatives, technical limitations have limited its application to larger datasets. Using the Lean 4 proof assistant, I formalized bounds on the properties of TLSH most relevant to its scalability and identified flaws in prior TLSH nearest-neighbor search algorithms. I leveraged these formal results to design correct acceleration structures for TLSH nearest-neighbor queries. On typical analyst workloads, these structures performed one to two orders of magnitude faster than the prior state-of-the-art, allowing analysts to use datasets at least an order of magnitude larger than what was previously feasible with the same computational resources. I make all code and data publicly available.
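The accelerated nearest-neighbor idea can be illustrated with a generic pivot-based pruning sketch. This is a hypothetical stand-in using Hamming distance on fixed-length hash strings, not the paper's actual structures or TLSH's scoring function; the pruning bound below is valid only when the distance is a true metric, and proving which such assumptions actually hold for TLSH is precisely the kind of question the paper's Lean 4 formalization addresses.

```python
def hamming(a: str, b: str) -> int:
    """Positionwise mismatch count between two equal-length hash strings."""
    return sum(x != y for x, y in zip(a, b))

class PivotIndex:
    """Linear index with a triangle-inequality lower bound for pruning."""

    def __init__(self, hashes):
        self.pivot = hashes[0]
        # Precompute each item's distance to the pivot once, at build time.
        self.entries = [(hamming(self.pivot, h), h) for h in hashes]

    def nearest(self, query):
        dq = hamming(self.pivot, query)
        best, best_d = None, float("inf")
        for dp, h in self.entries:
            # |d(pivot, h) - d(pivot, query)| <= d(h, query),
            # so items whose bound already exceeds the best hit are skipped.
            if abs(dp - dq) >= best_d:
                continue
            d = hamming(query, h)
            if d < best_d:
                best, best_d = h, d
        return best, best_d
```

The precomputed pivot distances let most candidates be rejected without computing their full distance to the query, which is where the speedup over a naive linear scan comes from.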
Forensic Science International: Digital Investigation, Volume 53 (2025), Article 301922.
Citations: 0
Enhancing DFIR in orchestration Environments: Real-time forensic framework with eBPF for windows
IF 2.2 | CAS Tier 4 (Medicine) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-01 | Epub Date: 2025-08-01 | DOI: 10.1016/j.fsidi.2025.301923
Philgeun Jin , Namjun Kim , Doowon Jeong
Digital forensic investigations in Windows orchestration environments face critical challenges, including the ephemeral nature of containers, dynamic scaling, and limited visibility into low-level system events. Traditional event log-based approaches often fail to capture essential kernel-level artifacts such as process creation, file I/O, and registry modifications. To overcome these limitations, this paper introduces a novel DFIR framework that leverages eBPF to enable real-time kernel-level monitoring in containerized environments. Building on Microsoft's Windows eBPF project, we developed custom eBPF extensions tailored for DFIR. Aligned with NIST SP 800-61 guidelines, the proposed framework integrates unified workflows for preparation, detection, containment, and recovery through a centralized management console. Through case studies of cryptocurrency mining, ransomware, and blue screen of death attacks, we demonstrate our framework's ability to identify malicious processes that traditional event log-based methods might miss, while confirming minimal system overhead and high compatibility with existing orchestration platforms.
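The central-console idea described here can be sketched as a tiny event triage step: kernel-level events that ordinary event logs often miss are filtered and grouped per container for a unified view. This is a minimal illustrative sketch; the event kinds, `KernelEvent` shape, and `triage` helper are hypothetical, and the framework's real collection path runs through eBPF programs, not Python.

```python
from dataclasses import dataclass

# Kernel-level artifact kinds the abstract highlights as easy to miss
# in traditional event logs (names are illustrative).
KERNEL_ARTIFACTS = {"process_create", "file_io", "registry_modify"}

@dataclass(frozen=True)
class KernelEvent:
    kind: str
    container_id: str
    detail: str

def triage(events):
    """Group watched kernel events by container for a central console view."""
    grouped = {}
    for e in events:
        if e.kind in KERNEL_ARTIFACTS:
            grouped.setdefault(e.container_id, []).append(e)
    return grouped
```

Grouping by container identity matters in orchestration environments because containers are ephemeral: the evidence must be attributed to a workload before that workload disappears.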
Forensic Science International: Digital Investigation, Volume 53 (2025), Article 301923.
Citations: 0
Your forensic AI-assistant, SERENA: Systematic extraction and reconstruction for enhanced A2P message forensics
IF 2.2 | CAS Tier 4 (Medicine) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-01 | Epub Date: 2025-08-01 | DOI: 10.1016/j.fsidi.2025.301931
Jieon Kim, Byeongchan Jeong, Seungeun Park, Sangjin Lee, Jungheum Park
The integration of physical and online activities in today's hyper-connected world has blurred previously distinct boundaries. Online actions such as reservations, payments, and logins generate application-to-person (A2P) messages, which serve as valuable datasets for tracking user behavior. Although A2P messages from different service providers may vary in structure, the information within each message can be systematically normalized based on user behavior and service characteristics. However, traditional forensic tools have been unable to effectively identify and extract such forensically valuable information from these A2P messages. In this study, we leverage large language models (LLMs) combined with prompt engineering to analyze A2P messages from multiple service providers, addressing the limitations of existing forensic tools in extracting meaningful insights from unstructured or semi-structured text stored in messages and emails. The proposed methodology employs A2P messages to elaborately reconstruct user activity, enabling digital forensic investigations to identify case-relevant information with enhanced efficiency and accuracy.
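The normalization target the abstract describes can be pictured with a toy example: a semi-structured A2P text mapped into a behavior-centric record. The paper uses LLMs with prompt engineering for this step; the regex below is only a hypothetical stand-in for one message category, and the field names are assumptions.

```python
import re

# Hypothetical pattern for one A2P category ("reservation confirmed").
RESERVATION = re.compile(
    r"Your reservation at (?P<place>.+?) on (?P<date>\d{4}-\d{2}-\d{2}) "
    r"at (?P<time>\d{2}:\d{2}) is confirmed"
)

def normalize(message: str):
    """Map a raw A2P text into a normalized activity record, or None."""
    m = RESERVATION.search(message)
    if not m:
        return None
    return {"action": "reservation", **m.groupdict()}
```

The point of such a schema is that records from different service providers, however their wording differs, land in the same fields and can be sorted onto a single user-activity timeline.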
Forensic Science International: Digital Investigation, Volume 53 (2025), Article 301931.
Citations: 0
Improved Bitcoin simulation model and address heuristic method
IF 2.2 | CAS Tier 4 (Medicine) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-01 | Epub Date: 2025-08-01 | DOI: 10.1016/j.fsidi.2025.301935
Yanan Gong, Kam Pui Chow, Siu Ming Yiu
Cryptocurrency-related crimes are on the rise and have a wide-ranging impact across various areas. To effectively combat and prevent such crimes, cryptocurrency forensics, which relies on blockchain analysis, is essential. Despite advancements in Bitcoin de-anonymization techniques, several challenges persist. The absence of authentic data labels introduces uncertainty in de-anonymization results, especially in the context of address clustering. This issue is further compounded by the development of privacy-enhancing technologies that obscure address linkages, thus undermining the reliability of outcomes as forensic evidence. To address these limitations, this study focuses on Bitcoin blockchain analysis and the improvement of address clustering. Specifically, the work presents an enhanced simulation model designed to accurately simulate real Bitcoin transactions, offering a stable platform for evaluating address clustering algorithms that utilize transaction details, thereby facilitating the assessment of the admissibility of clustering results. Meanwhile, we introduce a new heuristic algorithm aimed at identifying one-time change addresses, with experimental results demonstrating that it achieves more precise clustering outcomes than existing heuristic methods. Furthermore, our blockchain analysis reveals overarching patterns and recent changes in the Bitcoin blockchain, particularly following the introduction of the BRC-20 token.
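The family of heuristics the abstract refers to can be sketched with a classic baseline: in a transaction, an output address that never appears anywhere else in the observed history is a plausible one-time change address. This is a simplified illustration, not the paper's new algorithm, and the transaction dictionary shape is an assumption.

```python
from collections import Counter

def one_time_change_candidates(txs):
    """Flag, per transaction, a single output address never seen elsewhere.

    txs: list of {"txid": str, "outputs": [address, ...]} records.
    """
    seen = Counter(addr for tx in txs for addr in tx["outputs"])
    candidates = {}
    for tx in txs:
        fresh = [a for a in tx["outputs"] if seen[a] == 1]
        if len(fresh) == 1:          # ambiguous if several outputs are fresh
            candidates[tx["txid"]] = fresh[0]
    return candidates
```

In real analyses such heuristics are fragile (address reuse patterns and privacy techniques break them), which is why the paper pairs its improved heuristic with a simulation model that supplies ground-truth labels for evaluation.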
Forensic Science International: Digital Investigation, Volume 53 (2025), Article 301935.
Citations: 0
ANOC: Automated NoSQL database carver
IF 2.2 | CAS Tier 4 (Medicine) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-01 | Epub Date: 2025-08-01 | DOI: 10.1016/j.fsidi.2025.301929
Mahfuzul I. Nissan , James Wagner , Alexander Rasin
The increased use of NoSQL databases to store and manage data has led to a demand to include them in forensic investigations. Most NoSQL databases use diverse storage formats compared to file carving and relational database forensics. For example, some NoSQL databases manage key-value pairs using B-Trees, while others maintain hash tables or even binary protocols for serialization. Current research on NoSQL carving focuses on single-database solutions, making it impractical to develop individual carvers for every NoSQL system. This necessitates a generalized approach to forensic recovery, enabling the creation of a unified carver that can operate effectively across various NoSQL platforms.
In this research, we introduce Automated NoSQL Carver, ANOC, a novel tool designed to reconstruct database contents from raw database images without relying on the database API or logs. ANOC adapts to the unique storage characteristics of various NoSQL systems, utilizing byte-level reverse engineering to identify and parse data structures. By analyzing storage layouts algorithmically, ANOC identifies and reconstructs key-value pairs, hierarchical storage structures, and associated metadata across multiple NoSQL platforms.
Through extensive experimentation, we demonstrate ANOC's ability to recover data from four representative key-value store NoSQL databases: Berkeley DB, ZODB, etcd, and LMDB. We also examine ANOC's limitations when carving corrupted data and RAM snapshots. Our findings establish the feasibility of a generalized carver capable of addressing the challenges posed by the diverse and evolving NoSQL ecosystem.
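Byte-level carving of the kind described can be sketched as scanning a raw image for a record signature and parsing length-prefixed key-value pairs behind it. The `KV01` signature and the two-length layout below are entirely hypothetical, not the on-disk format of any of the four evaluated databases; a real generalized carver must reverse-engineer each engine's actual layout.

```python
import struct

MAGIC = b"KV01"  # hypothetical record signature, not a real NoSQL format

def carve_records(raw: bytes):
    """Scan a raw image for length-prefixed key-value records."""
    records, i = [], raw.find(MAGIC)
    while i != -1:
        body = i + len(MAGIC)
        # Assumed layout: two big-endian uint16 lengths follow the signature.
        klen, vlen = struct.unpack_from(">HH", raw, body)
        start = body + 4
        key = raw[start:start + klen]
        value = raw[start + klen:start + klen + vlen]
        records.append((key, value))
        i = raw.find(MAGIC, start + klen + vlen)
    return records
```

Because the scan keys off byte signatures rather than the database API, records survive even when the surrounding file is partially damaged, which is why such carvers matter for the corrupted-data scenarios the abstract mentions.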
Forensic Science International: Digital Investigation, Volume 53 (2025), Article 301929.
Citations: 0
Leveraging memory forensics to investigate and detect illegal 3D printing activities
IF 2.2 | CAS Tier 4 (Medicine) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-01 | Epub Date: 2025-08-01 | DOI: 10.1016/j.fsidi.2025.301925
Hala Ali , Andrew Case , Irfan Ahmed
As 3D printing is widely adopted across critical sectors, malicious users exploit this technology to produce illegal tools for criminal activities. The increasing availability of affordable 3D printers and the limitations of current regulations highlight the urgent need for robust forensic capabilities. While existing research focuses on the physical forensics of printed objects, the digital aspects of 3D printing forensics remain underexplored, resulting in a significant investigative gap. This paper introduces SliceSnap, a novel memory forensics framework that analyzes the volatile memory of slicing software, which is essential for converting 3D models into printer-executable G-code instructions. Our investigation focuses on Ultimaker Cura, the most popular Python-based slicing tool. By leveraging the Python garbage collector and conducting structural analysis of its objects, SliceSnap can extract the mesh data of 3D models, G-code instructions, slicing settings, detailed 3D printer metadata, and logging information. Given the potential for slicing software compromises, our framework extends beyond artifact extraction to include the complementary analysis tool, G-parser. This tool detects malicious G-code manipulations by finding the discrepancies between the original settings and those extracted from the G-code. Evaluation results demonstrated the effectiveness of SliceSnap in recovering design files and G-code of various criminal tools, such as firearms and TSA master keys, with 100% accuracy, in addition to providing detailed information about the slicing software and 3D printer. The evaluation also analyzed the temporal persistence of memory artifacts across critical stages of Cura's lifecycle. Moreover, through G-parser, the framework successfully detected the G-code manipulations conducted by our novel attack vector that targets G-code during inter-process communication within the software. 
Implemented as Volatility 3 plugins, SliceSnap provides investigators with automated capabilities to detect 3D printing-related criminal activities.
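The garbage-collector-driven idea can be pictured in-process: Python's `gc` module can enumerate live objects and filter them by type, which is the same object-graph walk SliceSnap performs against a memory image rather than a running interpreter. The `MeshData` class and field names below are hypothetical stand-ins for a slicer's model objects.

```python
import gc

class MeshData:
    """Hypothetical stand-in for a slicer's in-memory 3D model object."""
    def __init__(self, name, vertex_count):
        self.name = name
        self.vertex_count = vertex_count

def live_instances(cls):
    """Enumerate reachable instances of cls via the garbage collector."""
    return [obj for obj in gc.get_objects() if type(obj) is cls]
```

Against a memory snapshot the walk is harder: the analyst must locate the garbage collector's structures in the raw dump and follow pointers manually, which is what makes the structural analysis described in the abstract necessary.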
Forensic Science International: Digital Investigation, Volume 53 (2025), Article 301925.
Citations: 0
Bytewise approximate matching: Evaluating common scenarios for executable files
IF 2.2 | CAS Tier 4 (Medicine) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-01 | Epub Date: 2025-08-01 | DOI: 10.1016/j.fsidi.2025.301927
Carlo Jakobs, Axel Mahr, Martin Lambertz, Mariia Rybalka, Daniel Plohmann
This research explores the application of bytewise approximate matching algorithms on executable files, evaluating the effectiveness of ssdeep, sdhash, TLSH, and MRSHv2 across various scenarios, where approximate matching seems to be a natural tool to employ. Previous works already underlined that approximate matching is often used for tasks where the algorithms have not been thoroughly and systematically evaluated. Pagani et al. (2018), in particular, highlighted the shortcomings of previous research and tried to improve current knowledge about the applicability of approximate matching in the context of executable files by evaluating typical use cases. We extend their work by taking a closer look at further common scenarios that are not covered in their article. Specifically, we examine use cases such as different versions of the same software and comparisons between on-disk and in-memory representations of the same program, both for malicious and benign software.
Our findings reveal that the considered algorithms’ performance across all evaluated scenarios was generally unsatisfactory. Notably, they struggle with size-related and localized modifications introduced during the loading stage. Furthermore, executables with no functional similarity may be mismatched due to shared byte-level similarity caused by embedded resources or inherent to certain programming languages or runtime environments. Consequently, these algorithms should be used cautiously and regarded as assisting tools rather than reliable methods for indicating similarity between executable files, as both false positives and false negatives can occur, and users should be aware of them.
Moreover, while some of the unfavored results stem from design decisions, we observed unexpected behavior in some experiments that we could trace back to issues in the reference implementations of the algorithms. After fixing the implementations, the strange effects in our results indeed disappeared. It is still an open question if and to what extent previous experiments and evaluations were affected by these issues.
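The size-sensitivity problem the authors observe can be demonstrated with a crude stand-in: direct byte-sequence similarity via `difflib`, not any of the evaluated digest-based algorithms (ssdeep, sdhash, TLSH, MRSHv2), which compare compact digests rather than the raw bytes. Even here, appending loader-style padding to an otherwise identical byte string depresses the score.

```python
from difflib import SequenceMatcher

def similarity(a: bytes, b: bytes) -> float:
    """Direct byte-sequence similarity in [0, 1]; illustrative only."""
    return SequenceMatcher(None, a, b, autojunk=False).ratio()

# Toy "executable": identical content on disk and in memory, except that
# the in-memory image has grown by zero-filled padding (a size-related,
# localized modification of the kind the evaluation flags).
on_disk = b"\x4d\x5a\x90\x00" + b"code" * 64
in_memory = on_disk + b"\x00" * 128

score = similarity(on_disk, in_memory)
```

The same effect, amplified by digest construction, is one reason the evaluated algorithms struggle to match on-disk and in-memory representations of the same program.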
Forensic Science International: Digital Investigation, Volume 53 (2025), Article 301927.
Citations: 0
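The scenarios in the abstract above all reduce to scoring byte-level similarity between two executables. As a minimal standard-library sketch of that idea — a toy Jaccard score over byte n-grams, explicitly not ssdeep, sdhash, TLSH, or MRSHv2 — the following illustrates why even a small localized insertion, such as a modification made during loading, depresses a bytewise score:

```python
# Minimal byte-level similarity sketch (NOT one of the evaluated algorithms):
# Jaccard similarity over sliding byte n-grams, to illustrate how localized
# modifications reduce bytewise similarity between two binaries.

def ngrams(data: bytes, n: int = 4) -> set:
    """Return the set of length-n byte substrings of `data`."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard(a: bytes, b: bytes, n: int = 4) -> float:
    """Jaccard similarity of the n-gram sets of two byte strings."""
    sa, sb = ngrams(a, n), ngrams(b, n)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

base = bytes(range(256)) * 8                        # stand-in for an on-disk binary
patched = base[:100] + b"\x90" * 16 + base[100:]    # localized in-memory change

print(round(jaccard(base, base), 2))   # identical inputs score 1.0
print(jaccard(base, patched) < 1.0)    # a small insertion lowers the score
```

Real approximate-matching algorithms additionally compress such feature sets into short digests before comparison, which is where the size-related trade-offs discussed in the abstract enter.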
Detecting hidden kernel modules in memory snapshots
IF 2.2 | CAS Tier 4 (Medicine) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-01 | Epub Date: 2025-08-01 | DOI: 10.1016/j.fsidi.2025.301928
Roland Nagy
Rootkit infections have plagued IT systems for several decades now. As non-trivial threats often employed by sophisticated adversaries, rootkits have received a large amount of attention, from both the industrial and academic communities. Consequently, rootkit detection has a rich literature, but most papers focus on only detecting the fact that an infection happened. They rarely offer mitigation, let alone identifying the piece of malware. We aim to solve this by not only detecting rootkit infections but by finding the malware as well. Our paper has three main goals: extend the state of the art of cross-view-based detection of Loadable Kernel Modules (the de-facto delivery method of Linux kernel rootkits), provide a memory forensics tool that implements our detection method and enables further investigation of loaded modules, and publish the dataset we used to evaluate our solution. We implemented our tool in the form of a Volatility plugin and compared it to the already existing module detection capability of Volatility. We tested them on 55 rootkit-infected memory dumps, covering 27 different versions of the Linux kernel. We also provide compatibility analysis with different kernel versions, ranging from the initial release to the latest (6.13, at the time of writing).
{"title":"Detecting hidden kernel modules in memory snapshots","authors":"Roland Nagy","doi":"10.1016/j.fsidi.2025.301928","DOIUrl":"10.1016/j.fsidi.2025.301928","url":null,"abstract":"<div><div>Rootkit infections have plagued IT systems for several decades now. As non-trivial threats often employed by sophisticated adversaries, rootkits have received a large amount of attention, from both the industrial and academic communities. Consequently, rootkit detection has a rich literature, but most papers focus on only detecting the fact that an infection happened. They rarely offer mitigation, let alone identifying the piece of malware. We aim to solve this by not only detecting rootkit infections but by finding the malware as well. Our paper has three main goals: extend the state of the art of cross-view-based detection of Loadable Kernel Modules (the de-facto delivery method of Linux kernel rootkits), provide a memory forensics tool that implements our detection method and enables further investigation of loaded modules, and publish the dataset we used to evaluate our solution. We implemented our tool in the form of a Volatility plugin and compared it to the already existing module detection capability of Volatility. We tested them on 55 rootkit-infected memory dumps, covering 27 different versions of the Linux kernel. 
We also provide compatibility analysis with different kernel versions, ranging from the initial release to the latest (6.13, at the time of writing).</div></div>","PeriodicalId":48481,"journal":{"name":"Forensic Science International-Digital Investigation","volume":"53 ","pages":"Article 301928"},"PeriodicalIF":2.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144749091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
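The cross-view approach named in the abstract boils down to comparing two views of the same system state: what the kernel reports versus what a memory scan recovers. A minimal sketch of that set difference (all module names are hypothetical, and this is not the paper's Volatility plugin):

```python
# Toy cross-view comparison: a module present in a carved/scanned view of
# memory but absent from the kernel's own module list is a hiding candidate.

def hidden_modules(reported, carved):
    """Modules found by scanning memory but absent from the reported list."""
    return carved - reported

# Hypothetical example data (illustrative names only):
reported_view = {"ext4", "xfs", "nf_tables"}             # e.g. module-list walk
carved_view = {"ext4", "xfs", "nf_tables", "evil_lkm"}   # e.g. structure scanning

print(sorted(hidden_modules(reported_view, carved_view)))  # ['evil_lkm']
```

The hard part in practice, which the paper addresses, is building the carved view reliably across kernel versions; the comparison itself stays this simple.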
I know where you have been last summer: Extracting privacy-sensitive information via forensic analysis of the Mercedes-Benz NTG5*2 infotainment system
IF 2 | CAS Tier 4 (Medicine) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-06-01 | Epub Date: 2025-03-14 | DOI: 10.1016/j.fsidi.2025.301909
Dario Stabili, Filip Valgimigli, Mirco Marchetti
Modern vehicles are equipped with In-Vehicle Infotainment (IVI) systems that offer different functions, such as typical radio and multimedia services, navigation and internet browsing. To operate properly, IVI systems have to store different types of data locally, reflecting user preferences and behaviors. If stored and managed insecurely, these data might expose sensitive information and represent a privacy risk. In this paper we address this issue by presenting a methodology for the extraction of privacy-sensitive information from the popular NTG5 COMMAND IVI system (specifically, the NTG5*2 version by Harman), deployed in some Mercedes-Benz vehicles from 2013 to 2019. We show that it is possible to extract information related to geographic locations and various vehicle events (such as ignition and doors opening and closing) dating back to the previous 8 months, and that these data can be cross-referenced to precisely identify the activities and habits of the driver. Moreover, we develop a novel forensic tool to automate this task. Given the past usage of the NTG5 system, our work might have real-life implications for the privacy of millions of drivers, owners and passengers. As a final contribution, we develop a novel technique for SQLite data carving specifically designed to identify deleted data. Comparison with existing state-of-the-art tools for SQLite3 data recovery demonstrates that our approach is more effective in recovering deleted traces than general-purpose tools.
{"title":"I know where you have been last summer: Extracting privacy-sensitive information via forensic analysis of the Mercedes-Benz NTG5*2 infotainment system","authors":"Dario Stabili,&nbsp;Filip Valgimigli,&nbsp;Mirco Marchetti","doi":"10.1016/j.fsidi.2025.301909","DOIUrl":"10.1016/j.fsidi.2025.301909","url":null,"abstract":"<div><div>Modern vehicles are equipped with In-Vehicle Infotainment (IVI) systems that offers different functions, such as typical radio and multimedia services, navigation and internet browsing. To operate properly, IVI systems have to store locally different types of data, reflecting user preferences and behaviors. If stored and managed insecurely, these data might expose sensitive information and represent a privacy risk. In this paper we address this issue by presenting a methodology for the extraction of privacy-sensitive information from the popular <span><math><mi>N</mi><mi>T</mi><mi>G</mi><mn>5</mn></math></span> COMMAND IVI system (specifically, the <span><math><mi>N</mi><mi>T</mi><mi>G</mi><mn>5</mn><mo>⁎</mo><mn>2</mn></math></span> version by Harman), deployed in some Mercedes-Benz vehicles from 2013 to 2019. We show that it is possible to extract information related to geographic locations and various vehicles events (such as ignition and doors opening and closing) dating back to the previous 8 months, and that these data can be cross-referenced to precisely identify the activities and habits of the driver. Moreover, we develop a novel forensic tool to automate this task.<span><span><sup>1</sup></span></span> Given the past usage of the <span><math><mi>N</mi><mi>T</mi><mi>G</mi><mn>5</mn></math></span> system, our work might have real life implications for the privacy of millions of drivers, owners and passengers. As a final contribution, we develop a novel technique for SQLite data carving specifically designed to identify deleted data. 
Comparison with existing state-of-the-art tools for SQLite3 data recovery demonstrates that our approach is more effective in recovering deleted traces than general purpose tools.</div></div>","PeriodicalId":48481,"journal":{"name":"Forensic Science International-Digital Investigation","volume":"53 ","pages":"Article 301909"},"PeriodicalIF":2.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143620486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
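The SQLite carving contribution rests on a property of the file format: with secure_delete off (the common default), a DELETE only marks a cell's space as reusable and leaves most of its payload bytes in the database file until they are overwritten. A minimal standard-library demonstration of that behavior (the marker value is hypothetical, and this is not the paper's carving technique):

```python
# Why SQLite carving can recover deleted rows: DELETE frees the cell but,
# with secure_delete off, does not zero its content, so the raw payload
# bytes usually remain in the file until overwritten or vacuumed.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
marker = "GEO-TRACE-48.1351N-11.5820E"  # distinctive, hypothetical value

con = sqlite3.connect(path)
con.execute("PRAGMA secure_delete=OFF")          # keep deleted bytes in place
con.execute("CREATE TABLE trips (location TEXT)")
con.execute("INSERT INTO trips VALUES (?)", (marker,))
con.commit()
con.execute("DELETE FROM trips")
con.commit()
rows = con.execute("SELECT count(*) FROM trips").fetchone()[0]
con.close()

raw = open(path, "rb").read()
print(rows)                    # 0: the row is gone from query results
print(marker.encode() in raw)  # True: its bytes linger in the raw file
```

A carver scans those raw bytes directly, reconstructing records from freed cell space that the query interface no longer exposes.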
Blind protocol identification using synthetic dataset: A case study on geographic protocols
IF 2 | CAS Tier 4 (Medicine) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-06-01 | Epub Date: 2025-03-13 | DOI: 10.1016/j.fsidi.2025.301911
Mohammad Abbasi-Azar, Mehdi Teimouri, Mohsen Nikray
Network forensics faces major challenges, including increasingly sophisticated cyberattacks and the difficulty of obtaining labeled datasets for training AI-driven security tools. Blind Protocol Identification (BPI), essential for detecting covert data transfers, is particularly impacted by these data limitations. This paper introduces a novel and inherently scalable method for generating synthetic datasets tailored for BPI in network forensics. Our approach emphasizes feature engineering and a statistical-analytical model of feature distributions to address the scarcity and imbalance of labeled data. We demonstrate the effectiveness of this method through a case study on geographic protocols, where we train Random Forest models using only synthetic datasets and evaluate their performance on real-world traffic. This work presents a promising solution to the data challenges in BPI, enabling reliable protocol identification while maintaining data privacy and overcoming traditional data collection limitations.
{"title":"Blind protocol identification using synthetic dataset: A case study on geographic protocols","authors":"Mohammad Abbasi-Azar ,&nbsp;Mehdi Teimouri ,&nbsp;Mohsen Nikray","doi":"10.1016/j.fsidi.2025.301911","DOIUrl":"10.1016/j.fsidi.2025.301911","url":null,"abstract":"<div><div>Network forensics faces major challenges, including increasingly sophisticated cyberattacks and the difficulty of obtaining labeled datasets for training AI-driven security tools. Blind Protocol Identification (BPI), essential for detecting covert data transfers, is particularly impacted by these data limitations. This paper introduces a novel and inherently scalable method for generating synthetic datasets tailored for BPI in network forensics. Our approach emphasizes feature engineering and a statistical-analytical model of feature distributions to address the scarcity and imbalance of labeled data. We demonstrate the effectiveness of this method through a case study on geographic protocols, where we train Random Forest models using only synthetic datasets and evaluate their performance on real-world traffic. This work presents a promising solution to the data challenges in BPI, enabling reliable protocol identification while maintaining data privacy and overcoming traditional data collection limitations.</div></div>","PeriodicalId":48481,"journal":{"name":"Forensic Science International-Digital Investigation","volume":"53 ","pages":"Article 301911"},"PeriodicalIF":2.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143610262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
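The training setup described above — fitting a model on purely synthetic samples drawn from per-class statistical models, then applying it to unseen traffic — can be sketched with the standard library alone. Everything here is an assumption for illustration: the feature model and protocol names are hypothetical, and a nearest-centroid rule stands in for the paper's Random Forest:

```python
# Sketch of training on synthetic data only: draw labelled feature vectors
# from assumed per-protocol distributions, fit centroids, classify new points.
import random
import statistics

random.seed(7)

# Hypothetical feature model per protocol: (mean, sd) for two features,
# e.g. payload length and an entropy-like statistic.
PROTOCOL_MODELS = {
    "proto_A": ((60.0, 8.0), (0.30, 0.05)),
    "proto_B": ((180.0, 15.0), (0.70, 0.05)),
}

def synth_samples(model, n):
    """Draw n synthetic feature vectors from the given statistical model."""
    (mu_len, sd_len), (mu_ent, sd_ent) = model
    return [(random.gauss(mu_len, sd_len), random.gauss(mu_ent, sd_ent))
            for _ in range(n)]

# "Train" on synthetic data only: one centroid per class.
centroids = {
    name: tuple(statistics.fmean(col) for col in zip(*synth_samples(model, 200)))
    for name, model in PROTOCOL_MODELS.items()
}

def classify(x):
    """Assign x to the class with the nearest centroid (squared distance)."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

print(classify((58.0, 0.28)))   # falls near proto_A's assumed model
print(classify((175.0, 0.72)))  # falls near proto_B's assumed model
```

The paper's point is that when the assumed feature distributions are realistic, a classifier trained only on such synthetic samples can transfer to real captures, sidestepping the labeled-data shortage.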