Color Doppler echocardiography enables visualization of blood flow within the heart. However, the limited frame rate impedes the quantitative assessment of blood velocity throughout the cardiac cycle, thereby compromising a comprehensive analysis of ventricular filling. Concurrently, deep learning has shown promising results in the post-processing of echocardiographic data for various applications. This work explores the use of deep learning models for intracardiac Doppler velocity estimation from a reduced number of filtered I/Q signals. We adopted a supervised learning approach, simulating patient-based cardiac color Doppler acquisitions, and proposed data augmentation strategies to enlarge the training dataset. We implemented architectures based on convolutional neural networks, focusing on a comparison between the U-Net model and the recent ConvNeXt model, and assessed real-valued versus complex-valued representations. We found that both models outperformed the state-of-the-art autocorrelator method, effectively mitigating aliasing and noise. We did not observe significant differences between real-valued and complex-valued data. Finally, we validated the models on in vitro and in vivo experiments. All models produced results quantitatively comparable to the baseline and were more robust to noise. ConvNeXt was the only model to achieve high-quality results on in vivo aliased samples. These results demonstrate the potential of supervised deep learning methods for Doppler velocity estimation from a reduced number of acquisitions.
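The "state-of-the-art autocorrelator method" used as the baseline above is the conventional lag-1 autocorrelation (Kasai) estimator applied to the slow-time I/Q ensemble. The following NumPy sketch illustrates that estimator only; the array layout, argument names, and sign convention are illustrative assumptions and do not reflect the paper's implementation.

```python
import numpy as np

def kasai_velocity(iq, prf, f0, c=1540.0):
    """Lag-1 autocorrelation (Kasai) Doppler velocity estimate.

    iq  : complex slow-time I/Q ensemble, shape (n_frames, n_z, n_x)
    prf : pulse repetition frequency in Hz
    f0  : transmit center frequency in Hz
    c   : assumed speed of sound in m/s
    Returns an axial velocity map in m/s (sign convention may differ by vendor).
    """
    # Lag-1 autocorrelation along the slow-time (ensemble) axis.
    r1 = np.sum(iq[1:] * np.conj(iq[:-1]), axis=0)
    # Mean Doppler phase shift per pulse, mapped to velocity via the Doppler equation.
    return c * prf / (4.0 * np.pi * f0) * np.angle(r1)
```

Because the estimate relies on the phase of r1, it wraps (aliases) once the Doppler phase shift per pulse exceeds ±π, which is the limitation the learned models above are meant to mitigate.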
Ultrasound Localization Microscopy (ULM), an emerging medical imaging technique, effectively resolves the classical trade-off between resolution and penetration inherent in traditional ultrasound imaging, opening up new avenues for noninvasive observation of the microvascular system. However, traditional microbubble tracking methods encounter various practical challenges. They typically entail multiple processing stages, including intricate steps such as pairwise correlation and trajectory optimization, rendering real-time application infeasible. Furthermore, existing deep learning-based tracking techniques neglect the temporal aspects of microbubble motion, leading to ineffective modeling of microbubble dynamics. To address these limitations, this study introduces a novel approach, the Gated Recurrent Unit (GRU)-based Multitasking Temporal Neural Network (GRU-MT). GRU-MT is designed to handle the microbubble trajectory tracking and trajectory optimization tasks simultaneously. Additionally, we enhance the nonlinear motion model initially proposed by Piepenbrock et al. to better capture the nonlinear motion characteristics of microbubbles, thereby improving trajectory tracking accuracy. In this study, we perform a series of network-layer substitution experiments to systematically evaluate the performance of various temporal neural networks, including Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), GRU, the Transformer, and their bidirectional counterparts, on the microbubble trajectory tracking task. In addition, the proposed method is compared qualitatively and quantitatively with traditional microbubble tracking techniques. The experimental results demonstrate that GRU-MT exhibits superior nonlinear modeling capability and robustness on both simulated and in vivo datasets. It also achieves reduced trajectory tracking errors in less time, underscoring its potential for efficient microbubble trajectory tracking. Model code is open-sourced at https://github.com/zyt-Lib/GRU-MT.
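The abstract does not detail the GRU-MT architecture; the PyTorch sketch below is only a generic illustration of a GRU-based multitask sequence model for trajectory data. The class name, input features, and the two output heads (per-step displacement and an on-track/association score) are hypothetical assumptions intended to convey the multitasking idea, not the released GRU-MT code.

```python
import torch
import torch.nn as nn

class GRUTracker(nn.Module):
    """Illustrative GRU sequence model for microbubble trajectory data (not GRU-MT itself)."""

    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        # Each time step carries a candidate detection feature vector, e.g. (x, z, intensity).
        self.gru = nn.GRU(in_dim, hidden, num_layers=2, batch_first=True)
        self.displacement_head = nn.Linear(hidden, 2)  # predicted (dx, dz) per step
        self.association_head = nn.Linear(hidden, 1)   # logit: candidate belongs to the track

    def forward(self, seq):
        # seq: (batch, time, in_dim) sequence of candidate detections
        h, _ = self.gru(seq)
        return self.displacement_head(h), self.association_head(h)

# Minimal usage on random data, just to show the tensor shapes.
model = GRUTracker()
dummy = torch.randn(8, 10, 3)   # 8 candidate trajectories, 10 frames, 3 features each
disp, assoc = model(dummy)      # (8, 10, 2) displacements and (8, 10, 1) association logits
```

A recurrent backbone of this kind carries hidden state across frames, which is how the temporal character of microbubble motion can be modeled; swapping the nn.GRU module for an RNN, LSTM, or (bidirectional) variant mirrors the layer-substitution comparison described in the abstract.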