Significant advances have been made to improve the control and sensory functions of bionic hands. However, great challenges remain, and inadequate bidirectional neural compatibility with human users still limits their wide acceptance. Recent research has highlighted the necessity of matching the neuromechanical behaviors of the prosthesis to the sensorimotor system of amputees. A novel approach to achieving greater neural compatibility leverages biorealistic modeling with real-time computation. These studies suggest that this unique approach may transform the performance of hand prostheses. In parallel, a noninvasive technique of somatotopic sensory feedback based on evoked tactile sensation (ETS) has been developed to convey natural, intuitive, and digit-specific tactile information to users. This paper reports recent work on these two important aspects of sensorimotor function in prosthetic research. A background review is first presented on the state of the art of bionic hands and the various techniques for delivering tactile sensory information to users. Progress in developing the novel biorealistic hand prosthesis and the noninvasive ETS feedback technique is then highlighted. Finally, challenges in the future development of the biorealistic hand prosthesis and the implementation of ETS feedback are discussed with respect to shaping a next-generation hand prosthesis.
The rapid development of diagnostic technologies in healthcare is placing higher demands on physicians to handle and integrate the heterogeneous yet complementary data produced during routine practice. For instance, personalized diagnosis and treatment planning for a single cancer patient relies on various images (e.g., radiology, pathology, and camera images) and non-image data (e.g., clinical and genomic data). However, such decision-making procedures can be subjective and qualitative, and can show large inter-subject variability. With recent advances in multimodal deep learning, an increasingly large number of efforts have been devoted to a key question: how do we extract and aggregate multimodal information to ultimately provide more objective, quantitative computer-aided clinical decision making? This paper reviews recent studies addressing this question. Briefly, this review covers (a) an overview of current multimodal learning workflows, (b) a summary of multimodal fusion methods, (c) a discussion of performance, (d) applications in disease diagnosis and prognosis, and (e) challenges and future directions.
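To make the notion of multimodal fusion concrete, the sketch below illustrates one widely used pattern, feature-level (intermediate) fusion, in which modality-specific encoders map an imaging feature vector and tabular clinical variables into embeddings that are concatenated before a shared prediction head. The module names, feature dimensions, and PyTorch framing are illustrative assumptions, not details taken from the reviewed studies.

# Minimal sketch (illustrative, not from the reviewed studies):
# feature-level fusion of imaging features and tabular clinical data.
import torch
import torch.nn as nn

class FeatureFusionClassifier(nn.Module):
    def __init__(self, img_feat_dim=512, clin_feat_dim=16, emb_dim=64, n_classes=2):
        super().__init__()
        # Modality-specific encoders project each input to a common embedding size.
        self.img_encoder = nn.Sequential(nn.Linear(img_feat_dim, emb_dim), nn.ReLU())
        self.clin_encoder = nn.Sequential(nn.Linear(clin_feat_dim, emb_dim), nn.ReLU())
        # Shared head operates on the concatenated embeddings.
        self.head = nn.Linear(2 * emb_dim, n_classes)

    def forward(self, img_feats, clin_feats):
        z = torch.cat([self.img_encoder(img_feats), self.clin_encoder(clin_feats)], dim=1)
        return self.head(z)

# Example forward pass with random stand-in features for a batch of 4 patients.
model = FeatureFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 2])

Early fusion (concatenating raw inputs) and decision-level (late) fusion follow the same overall structure but move the point of combination earlier or later in the pipeline.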
In recent years, soft robotics technologies have enabled the development of a new generation of biomedical devices. The combination of elastomeric materials with tunable properties and muscle-like motions has paved the way toward more realistic phantoms and innovative soft active implants, such as artificial organs or assistive mechanisms. This review collects the most relevant studies in the field, giving insight into their distribution over the past 10 years and their level of development, and opening a discussion about the most commonly employed materials and actuation technologies. The reported results show promising trends, highlighting that the soft robotics approach can help replicate specific material characteristics in the case of static or passive organs, as well as reproduce distinctive natural motion patterns for the realization of dynamic phantoms or implants. At the same time, some important challenges still need to be addressed. Nevertheless, by joining forces with other research fields and disciplines, it will be possible to get one step closer to the development of complex, active, self-sensing, and deformable structures able to replicate as closely as possible the typical properties and functionalities of our natural body organs.

