The ever-expanding capabilities of machine learning are powered by the exponentially growing complexity of deep neural network (DNN) models, demanding more energy- and chip-area-efficient hardware to carry out increasingly computationally expensive model-inference and training tasks. Electrochemical random-access memories (ECRAMs) are developed specifically to implement efficient analog in-memory computing for these data-intensive workloads, offering critical advantages over competing memory technologies that were mostly developed originally for digital electronics. ECRAMs possess the distinctive capability to switch among a very large number of memristive states with a high level of symmetry, small cycle-to-cycle variability, and low energy consumption; they simultaneously exhibit good endurance, long data retention, switching speeds down to the nanosecond regime, and verified scalability into the sub-50 nm regime, and therefore hold great promise for realizing deep-learning accelerators when heterogeneously integrated with silicon-based peripheral circuits. In this review, we first examine the challenges in constructing in-memory-computing accelerators and the unique advantages of ECRAMs. We then critically assess the various ionic species, channel materials, and solid-state electrolytes employed in ECRAMs, whose distinct memristive-modulation and ionic-transport mechanisms influence device programming characteristics and performance metrics. Furthermore, we discuss ECRAM device engineering and integration schemes in the context of their implementation in high-density pseudo-crossbar array microarchitectures for performing DNN inference and training with high parallelism. Finally, we offer our insights into the major remaining obstacles and emerging opportunities in harnessing ECRAMs to realize deep-learning accelerators through material-device-circuit-architecture-algorithm co-design.