Large language models (LLMs) are increasing in capability and popularity, propelling their application in new domains—including as replacements for human participants in computational social science, user testing, annotation tasks and so on. In many settings, researchers seek to distribute their surveys to a sample of participants who are representative of the underlying human population of interest. This means that, to be a suitable replacement, LLMs need to be able to capture the influence of positionality (that is, the relevance of social identities like gender and race). However, we show that there are two inherent limitations in the way current LLMs are trained that prevent this. We argue analytically for why LLMs are likely to both misportray and flatten the representations of demographic groups, and then demonstrate this empirically on four LLMs through a series of human studies with 3,200 participants across 16 demographic identities. We also discuss a third limitation: identity prompts can essentialize identities. Throughout, we connect each limitation to a pernicious history of epistemic injustice that devalues lived experiences, which explains why replacement is harmful for marginalized demographic groups. Overall, we urge caution in use cases in which LLMs are intended to replace human participants whose identities are relevant to the task at hand. At the same time, in cases where the benefits of LLM replacement are judged to outweigh the harms (for example, when engaging human participants may cause them harm, or when the goal is to supplement rather than fully replace), we empirically demonstrate that our inference-time techniques reduce—but do not remove—these harms.
Drug–target interaction (DTI) prediction is a challenging yet essential task in drug repurposing. Graph learning models have drawn particular attention because they can substantially reduce the cost and time commitment of drug repurposing. However, many current approaches require demanding additional information beyond DTIs, which complicates their evaluation and limits their usability. Additionally, structural differences in the learning architectures of current models hinder their fair benchmarking. In this work, we first perform an in-depth evaluation of current DTI datasets and prediction models through a robust benchmarking process and show that DTI methods based on transductive models lack generalization and yield inflated performance under traditional evaluation, making them unsuitable for drug repurposing. We then propose a biologically driven strategy for negative-edge subsampling and, via in vitro validation, uncover previously unknown interactions missed by traditional subsampling. Finally, we provide a toolbox comprising all generated resources, which is crucial for fair benchmarking and robust model design.
Recently, many artificial intelligence (AI)-powered protein–ligand docking and scoring methods have been developed, demonstrating impressive speed and accuracy. However, these methods often neglect the physical plausibility of the docked complexes and their efficacy in virtual screening (VS) projects. We therefore conducted a comprehensive benchmark analysis of four AI-powered docking tools, four physics-based docking tools and two AI-enhanced rescoring methods. We first constructed the TrueDecoy set, a dataset on which redocking experiments revealed that KarmaDock and CarsiDock surpassed all physics-based tools in docking accuracy, whereas all physics-based tools notably outperformed the AI-based methods in structural rationality. The low physical plausibility of the docked structures generated by the top AI method, CarsiDock, mainly stems from insufficient intermolecular validity. The VS results on the TrueDecoy set highlight the effectiveness of RTMScore as a rescoring function, and Glide-based methods achieved the highest enrichment factors among all docking tools. Furthermore, we created the RandomDecoy set, a dataset that more closely resembles real-world VS scenarios, on which AI-based tools clearly outperformed Glide. Additionally, we found that the ligand-based postprocessing methods we employed had a weak or even negative impact on optimizing the conformations of docked complexes and on enhancing VS performance. Finally, we propose a hierarchical VS strategy that can efficiently and accurately enrich active molecules in large-scale VS projects.