{"title":"Are we fitting data or noise? Analysing the predictive power of commonly used datasets in drug-, materials-, and molecular-discovery.","authors":"Daniel Crusius, Flaviu Cipcigan, Philip Biggin","doi":"10.1039/d4fd00091a","DOIUrl":null,"url":null,"abstract":"Data-driven techniques for establishing quantitative structure property relations are a pillar of modern materials and molecular discovery. Fuelled by the recent progress in deep learning methodology and the abundance of new algorithms, it is tempting to chase benchmarks and incrementally build ever more capable machine learning (ML) models. While model evaluation has made significant progress, the intrinsic limitations arising from the underlying experimental data are often overlooked. In the chemical sciences data collection is costly, thus datasets are small and experimental errors can be significant. These limitations of such datasets affect their predictive power, a fact that is rarely considered in a quantitative way. In this study, we analyse commonly used ML datasets for regression and classification from drug discovery, molecular discovery, and materials discovery. We derived maximum and realistic performance bounds for nine such datasets by introducing noise based on estimated or actual experimental errors. We then compared the estimated performance bounds to the reported performance of leading ML models in the literature. Out of the nine datasets and corresponding ML models considered, four were identified to have reached or surpassed dataset performance limitations and thus, they may potentially be fitting noise. More generally, we systematically examine how data range, the magnitude of experimental error, and the number of data points influence dataset performance bounds. Alongside this paper, we release the Python package NoiseEstimator and provide a web- based application for computing realistic performance bounds. This study and the resulting tools will help practitioners in the field understand the limitations of datasets and set realistic expectations for ML model performance. This work stands as a reference point, offering analysis and tools to guide development of future ML models in the chemical sciences.","PeriodicalId":76,"journal":{"name":"Faraday Discussions","volume":"25 1","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Faraday Discussions","FirstCategoryId":"92","ListUrlMain":"https://doi.org/10.1039/d4fd00091a","RegionNum":3,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"CHEMISTRY, PHYSICAL","Score":null,"Total":0}
Abstract
Data-driven techniques for establishing quantitative structure-property relations are a pillar of modern materials and molecular discovery. With recent progress in deep-learning methodology and an abundance of new algorithms, it is tempting to chase benchmarks and incrementally build ever more capable machine learning (ML) models. While model evaluation has made significant progress, the intrinsic limitations arising from the underlying experimental data are often overlooked. In the chemical sciences, data collection is costly; datasets are therefore small and experimental errors can be significant. These limitations affect the predictive power of such datasets, a fact that is rarely considered quantitatively. In this study, we analyse commonly used ML datasets for regression and classification from drug, molecular, and materials discovery. We derive maximum and realistic performance bounds for nine such datasets by introducing noise based on estimated or actual experimental errors, and we compare these bounds to the reported performance of leading ML models in the literature. Of the nine datasets and corresponding ML models considered, four have reached or surpassed the estimated performance bounds and may therefore be fitting noise. More generally, we systematically examine how the data range, the magnitude of experimental error, and the number of data points influence dataset performance bounds. Alongside this paper, we release the Python package NoiseEstimator and provide a web-based application for computing realistic performance bounds. This study and the resulting tools will help practitioners understand the limitations of datasets and set realistic expectations for ML model performance. This work stands as a reference point, offering analysis and tools to guide the development of future ML models in the chemical sciences.
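The core idea behind such noise-derived bounds can be sketched in a few lines: treat the observed labels as ground truth, simulate a hypothetical perfect model whose predictions deviate from the labels only by the assumed experimental error, and record the metric such a model would achieve. The Python snippet below is a minimal illustration of this simulation under a Gaussian-noise assumption; the function name simulated_performance_bound, its parameters, and the example data are hypothetical and do not reflect the actual NoiseEstimator API.

```python
import numpy as np

def simulated_performance_bound(y, sigma, n_trials=1000, seed=0):
    """Estimate an approximate performance ceiling for a noisy dataset.

    Treats the observed labels y as ground truth and simulates a
    hypothetical 'perfect' model whose predictions differ from y only
    by Gaussian experimental error of standard deviation sigma. The
    metric this model achieves, averaged over many noise draws, is an
    estimate of the best performance any ML model could report.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    r2_scores, rmses = [], []
    for _ in range(n_trials):
        # Simulated re-measurement: ground truth plus experimental noise.
        y_noisy = y + rng.normal(0.0, sigma, size=y.shape)
        ss_res = np.sum((y - y_noisy) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2_scores.append(1.0 - ss_res / ss_tot)
        rmses.append(np.sqrt(np.mean((y - y_noisy) ** 2)))
    return float(np.mean(r2_scores)), float(np.mean(rmses))

# Illustrative example (not from the paper): labels spanning ~4 log units
# with an assumed 0.5 log-unit experimental error.
y_obs = np.random.default_rng(1).uniform(-6.0, -2.0, size=500)
r2_bound, rmse_bound = simulated_performance_bound(y_obs, sigma=0.5)
print(f"Approximate R^2 ceiling: {r2_bound:.2f}, RMSE floor: {rmse_bound:.2f}")
```

Under these assumptions the RMSE floor approaches the experimental error sigma and the R^2 ceiling approaches 1 - sigma^2/Var(y), which makes the abstract's point concrete: a wider data range or a smaller experimental error raises the attainable performance bound, independently of the ML model used.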