[This corrects the article DOI: 10.1016/j.patter.2025.101203.].
While individual MRI snapshots provide valuable insights, the longitudinal progression seen in repeated MRIs often holds greater diagnostic and prognostic value. However, a scarcity of longitudinal datasets comprising paired initial and follow-up scans hinders the application of machine learning to these crucial sequential tasks. We address this gap by proposing self-conditioned diffusion with gradient manipulation (SECONDGRAM) to generate absent follow-up imaging features, enabling predictions of how MRI features develop over time and enriching limited datasets through imputation. SECONDGRAM builds on neural diffusion models and introduces two key contributions: self-conditioned learning to leverage much larger, unlinked datasets, and gradient manipulation to combat instability and overfitting in a low-data setting. We evaluate SECONDGRAM on the UK Biobank dataset and show that it not only models MRI patterns better than existing baselines but also enhances training datasets, yielding better downstream results than naive approaches.
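As a rough illustration of the two ingredients named above, the sketch below pairs self-conditioning (feeding the denoiser its own earlier estimate of the clean features) with gradient norm clipping as one simple form of gradient manipulation. The `Denoiser` module, the `alphas_cumprod` schedule, and the clipping threshold are illustrative placeholders, not the SECONDGRAM implementation.

```python
# Minimal sketch of a self-conditioned diffusion training step on tabular
# imaging features. Everything here is an illustrative assumption, not the
# authors' code: the Denoiser architecture, the noise schedule, and the use
# of gradient norm clipping as the "gradient manipulation" step.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Input: noisy features + self-condition estimate + timestep scalar.
        self.net = nn.Sequential(nn.Linear(dim * 2 + 1, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x_t, t, x0_selfcond):
        return self.net(torch.cat([x_t, x0_selfcond, t[:, None]], dim=-1))

def training_step(model, optimizer, x0, alphas_cumprod, max_grad_norm=1.0):
    b, dim = x0.shape
    t = torch.randint(0, len(alphas_cumprod), (b,))
    a_bar = alphas_cumprod[t][:, None]
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise

    # Self-conditioning: half the time, feed the model's own x0 estimate
    # (computed without gradients) back in as an extra input; otherwise zeros.
    x0_sc = torch.zeros_like(x0)
    if torch.rand(()) < 0.5:
        with torch.no_grad():
            x0_sc = model(x_t, t.float(), x0_sc)

    pred_x0 = model(x_t, t.float(), x0_sc)
    loss = nn.functional.mse_loss(pred_x0, x0)

    optimizer.zero_grad()
    loss.backward()
    # Gradient manipulation shown here simply as norm clipping for stability.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    return loss.item()
```

The coin flip mimics the usual self-conditioning recipe of zeroing the extra input on a random fraction of steps, so the model also works when no previous estimate is available.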
Large language models (LLMs) have shown strong capabilities across disciplines such as chemistry, mathematics, and medicine, yet their application in power system research remains limited, and most studies still focus on supporting specific tasks under human supervision. Here, we introduce Revive Power Systems (RePower), an autonomous LLM-driven research platform that uses a reflection-evolution strategy to independently conduct complex research in power systems. RePower assists researchers by controlling devices, acquiring data, designing methods, and evolving algorithms to address problems that are difficult to solve but easy to evaluate. Validated on three critical data-driven tasks in power systems (parameter prediction, power optimization, and state estimation), RePower outperformed traditional methods. Consistent performance improvements were observed across multiple tasks, with an average error reduction of 29.07%. For example, in the power optimization task, the error decreased from 0.00137 to 0.000825, a reduction of 39.78%. This framework facilitates autonomous discoveries, promoting innovation in power systems research.
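For intuition, a reflection-evolution strategy for "difficult to solve but easy to evaluate" problems can be sketched as a propose-evaluate-reflect cycle. The `llm_propose` and `evaluate` callables below are hypothetical placeholders for an LLM interface and a task-specific error metric; this is a generic sketch, not RePower's actual architecture.

```python
# Illustrative reflection-evolution loop: propose a candidate solution,
# score it with a cheap automatic evaluator, then turn the score into
# feedback for the next proposal. `llm_propose` and `evaluate` are
# hypothetical placeholders supplied by the caller.
def reflection_evolution(task_description, llm_propose, evaluate, n_rounds=10):
    best_solution, best_error, feedback = None, float("inf"), ""
    for _ in range(n_rounds):
        # Ask the model for a new candidate, conditioned on past feedback.
        candidate = llm_propose(f"{task_description}\nPrevious feedback:\n{feedback}")
        error = evaluate(candidate)  # cheap, automatic scoring
        if error < best_error:
            best_solution, best_error = candidate, error
        # Reflection: turn the quantitative result into guidance for the next round.
        feedback = f"Last candidate scored error={error:.6g}; best so far={best_error:.6g}."
    return best_solution, best_error
```

The quoted 39.78% improvement is consistent with the reported figures: (0.00137 - 0.000825) / 0.00137 is approximately 0.3978.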
Cellular function is defined by pathways that, in turn, are determined by distance-mediated interactions between and within subcellular organelles, protein complexes, and macromolecular structures. Multichannel super-resolution microscopy (SRM) is uniquely placed to quantify distance-mediated interactions at the nanometer scale through its ability to label individual biological targets with independent markers that fluoresce in different spectra. We review novel computational methods that quantify interaction from multichannel SRM data in both point-cloud and voxel form. We discuss in detail the SRM-specific factors that can compromise interaction analysis, and we break down distinct classes of interactions in terms of representative cell biology use cases, the underappreciated non-linear physics at this scale, and the specialized methods developed for those use cases. An abstract mathematical model is introduced to facilitate the comparison and evaluation of interaction reconstruction methods and to quantify the computational bottlenecks. We discuss strategies for validating interaction analysis results when ground-truth data are sparse or incomplete. Finally, evolving trends and future directions are presented, highlighting the "multichannel gap," where interaction analysis is trailing behind the rapid increase in novel modes of multichannel SRM acquisition.
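As a minimal, generic example of a point-cloud interaction readout (not one of the specialized methods surveyed in the review), the snippet below computes, for each localization in one channel, the distance to its nearest neighbor in a second channel, using synthetic 2D coordinates in nanometers.

```python
# Toy nearest-neighbor distance analysis between two localization channels.
# Coordinates and channel sizes are synthetic placeholders; real SRM data
# would come from the microscope's localization tables.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
channel_a = rng.uniform(0, 1000, size=(500, 2))   # 500 localizations, 2D, nm
channel_b = rng.uniform(0, 1000, size=(400, 2))   # 400 localizations, 2D, nm

tree_b = cKDTree(channel_b)
nn_dist, _ = tree_b.query(channel_a, k=1)         # A -> B nearest-neighbor distances (nm)
print(f"median A->B nearest-neighbor distance: {np.median(nn_dist):.1f} nm")
```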
[This corrects the article DOI: 10.1016/j.patter.2025.101206.].
The UN Convention on Biological Diversity adopted new rules for sharing benefits from publicly available genetic sequence data, also known as digital sequence information (DSI). In this Opinion, the authors describe the key elements researchers need to be aware of, address real-life questions, and explain the practical implications of these rules for research and development.
Statistical analysis of extreme events in complex engineering systems is essential for system design and for reliability and resilience assessment. Because extreme events are rare and evaluating system performance is computationally demanding, estimating the probability of extreme failures is prohibitively expensive. Traditional methods, such as importance sampling, struggle with the high cost of deriving importance sampling densities for the numerous components in large-scale systems. Here, we propose a graph learning approach, called importance sampling based on a graph autoencoder (GAE-IS), which integrates a modified graph autoencoder model, termed a criticality assessor, with the cross-entropy-based importance sampling method. GAE-IS effectively decouples the criticality of components from their vulnerability to disastrous events in the workflow, demonstrating notable transferability and substantially reducing the computational cost of importance sampling in large-scale networks. The proposed methodology improves sampling efficiency by one to two orders of magnitude across several road networks and provides more accurate probability estimates.
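The cross-entropy importance sampling stage mentioned above can be sketched on a toy rare-event problem as follows. The Gaussian parametric family, the toy limit-state function, and all parameter names are illustrative assumptions, and the graph-autoencoder criticality assessor that GAE-IS adds on top of this stage is not shown.

```python
# Minimal sketch of cross-entropy (CE) importance sampling for a rare event
# P(g(X) <= 0) with X ~ N(0, I). Only the CE stage is illustrated; the
# limit-state function g and the Gaussian proposal family are toy choices.
import numpy as np

def ce_importance_sampling(g, dim, n=2000, rho=0.1, max_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)                               # proposal mean (unit covariance kept fixed)
    for _ in range(max_iter):
        x = rng.normal(mu, 1.0, size=(n, dim))
        perf = g(x)
        gamma = max(np.quantile(perf, rho), 0.0)     # adaptive level, stop lowering at 0
        elite = perf <= gamma
        # Likelihood ratio between nominal N(0, I) and current proposal N(mu, I).
        w = np.exp(-0.5 * np.sum(x**2, axis=1) + 0.5 * np.sum((x - mu)**2, axis=1))
        mu = np.average(x[elite], axis=0, weights=w[elite])
        if gamma <= 0.0:
            break
    # Final estimate with the tuned proposal.
    x = rng.normal(mu, 1.0, size=(n, dim))
    w = np.exp(-0.5 * np.sum(x**2, axis=1) + 0.5 * np.sum((x - mu)**2, axis=1))
    return np.mean(w * (g(x) <= 0.0))

# Toy limit state: failure when a standardized capacity margin drops below -3 sigma.
p_fail = ce_importance_sampling(lambda x: 3.0 + x.mean(axis=1) * np.sqrt(x.shape[1]), dim=10)
print(f"estimated failure probability: {p_fail:.2e}")
```

For this toy limit state the exact failure probability is the standard normal tail at -3, about 1.35e-3, which the estimator should recover with far fewer samples than plain Monte Carlo would need.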
High-throughput molecular profiling technologies have revolutionized molecular biology research over the past decades. One important use of molecular data is to predict phenotypes and other features of organisms using machine learning algorithms. Deep learning models have become increasingly popular for this task because of their ability to learn complex non-linear patterns. Applying deep learning to molecular profiles, however, is challenging: the data are very high dimensional while sample sizes are relatively small, causing models to overfit. One solution is to incorporate biological prior knowledge that guides the learning algorithm to process functionally related inputs together. This helps regularize the models and improves their generalizability and interpretability. Here, we describe three major strategies for using prior knowledge in deep learning models that make predictions from molecular profiles. We review the related deep learning architectures, including the main ideas behind the relatively new graph neural networks.
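One common way to encode such prior knowledge is a connectivity mask that restricts each hidden unit (a "pathway" node) to its known member genes. The sketch below uses PyTorch as an assumed framework and a random placeholder membership matrix in place of a real annotation resource; it illustrates the general idea rather than any specific published architecture.

```python
# Minimal sketch of a prior-knowledge-masked layer: each hidden unit stands
# for a pathway and only receives input from its member genes. The membership
# matrix here is a random placeholder for a real annotation resource.
import torch
import torch.nn as nn

class PathwayMaskedLinear(nn.Module):
    def __init__(self, mask):
        # mask: (n_pathways, n_genes) binary tensor of allowed connections.
        super().__init__()
        self.register_buffer("mask", mask.float())
        self.weight = nn.Parameter(torch.randn_like(self.mask) * 0.01)
        self.bias = nn.Parameter(torch.zeros(mask.shape[0]))

    def forward(self, x):
        # Zero out weights outside the known gene-pathway memberships.
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

n_genes, n_pathways = 1000, 50
mask = torch.rand(n_pathways, n_genes) < 0.02       # placeholder membership matrix
model = nn.Sequential(PathwayMaskedLinear(mask), nn.ReLU(), nn.Linear(n_pathways, 1))
pred = model(torch.randn(8, n_genes))               # 8 samples -> phenotype predictions
```

Because most of the masked weights are forced to zero, the effective number of parameters shrinks toward the number of annotated gene-pathway links, which is one simple way such priors regularize high-dimensional molecular models.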
The concept of dignity is proliferating in ethical, legal, and policy discussions of AI, yet dignity is an elusive concept with multiple philosophical interpretations. The authors argue that the unspecific and uncritical employment of the notion of dignity can be counterproductive for AI ethics.

