Introduction: Wearable exoskeletons are emerging technologies for providing movement assistance and rehabilitation for people with motor disorders. In this study, we focus on the specific gait pathology dropfoot, which is common after a stroke. Dropfoot makes it difficult to achieve foot clearance during swing and heel contact at early stance and often necessitates compensatory movements.
Methods: We developed a soft ankle exoskeleton consisting of actuation and transmission systems that assist two degrees of freedom simultaneously, dorsiflexion and eversion, and performed several proof-of-concept experiments on non-disabled persons. The actuation system consists of two motors worn on a waist belt. The transmission system delivers assistive force to the medial and lateral sides of the forefoot via Bowden cables. The coupling design enables variable assistance of dorsiflexion and eversion at the same time, and a force-free controller is proposed to compensate for device resistance. We first evaluated the performance of the exoskeleton in three seated movement tests: assisting dorsiflexion and eversion, controlling plantarflexion, and compensating for device resistance, and then in walking tests. In all proof-of-concept experiments, a dropfoot tendency was simulated by fastening a weight to the shoe over the lateral forefoot.
Results: In the first two seated tests, errors between the target and achieved ankle joint angles in both planes were low: below 1.5° when assisting dorsiflexion and/or controlling plantarflexion, and below 1.4° when assisting ankle eversion. In the third test, the force-free controller significantly compensated for the device resistance during ankle plantarflexion. In the gait tests, the exoskeleton was able to normalize ankle joint and foot segment kinematics, specifically the foot inclination and ankle inversion angles at initial contact and the ankle angle and clearance height during swing.
Discussion: Our findings support the feasibility of the new ankle exoskeleton design in assisting two degrees of freedom at the ankle simultaneously and show its potential to assist people with dropfoot and excessive inversion.
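The force-free mode described in the Methods can be illustrated with a minimal sketch: a PI loop that drives the measured Bowden-cable tension to zero so the wearer ideally feels no device resistance during plantarflexion. This is an illustrative toy model, not the authors' controller; the gains, the stiffness value `k_dev`, and the one-dimensional plant are all assumptions made here.

```python
def force_free_step(f_meas, integ, kp=2.0, ki=20.0, dt=0.001):
    """One PI step: command a motor velocity that drives residual cable tension to zero."""
    integ += f_meas * dt
    v_cmd = kp * f_meas + ki * integ   # positive tension -> pay out cable
    return v_cmd, integ

# Toy plant: cable tension grows with how far the motor lags the wearer's motion.
k_dev = 50.0                                # assumed effective device stiffness (N/m)
x_user, x_motor, integ = 0.02, 0.0, 0.0     # wearer plantarflexed ~2 cm of cable travel
f = k_dev * (x_user - x_motor)              # initial residual tension: 1 N
for _ in range(2000):                       # 2 s at a 1 kHz control rate
    f = k_dev * (x_user - x_motor)          # measured residual tension (N)
    v_cmd, integ = force_free_step(f, integ)
    x_motor += v_cmd * 0.001                # integrate the commanded motor velocity
```

After the loop the motor has "followed" the wearer and the residual tension is driven to essentially zero, which is the behavior the force-free controller aims for.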
Utilizing deep features from electroencephalography (EEG) data for emotional music composition provides a novel approach for creating personalized and emotionally rich music. Compared with textual data, converting continuous EEG and music signals into discrete units presents significant challenges, chiefly the lack of a clear, fixed vocabulary for standardizing EEG and audio data. Without such a standard, the mapping between EEG signals and musical elements (such as rhythm, melody, and emotion) is blurry and complex. We therefore propose a method that uses clustering to create discrete representations and a Transformer model to learn the mapping between them. Specifically, the model uses cluster labels to segment the signals and encodes the EEG and emotional music data independently to construct a vocabulary, thereby achieving a discrete representation. A time-series dictionary built with clustering algorithms captures and exploits the temporal and structural relationships between EEG and audio data more effectively. To address the insensitivity to temporal information in heterogeneous data, we adopt a multi-head attention mechanism and positional encoding, enabling the model to attend to information in different subspaces and better understand the complex internal structure of EEG and audio data. In addition, to address the mismatch between local and global information in emotion-driven music generation, we introduce an audio masking prediction loss. Our method achieves 68.19% on the Hits@20 metric, a 4.9% improvement over other methods, indicating its effectiveness.
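The clustering step that turns continuous signals into discrete tokens can be sketched as plain k-means over fixed-length signal windows, with each window mapped to the index of its nearest centroid. This is an illustrative sketch, not the paper's exact algorithm; the function name, window length, and number of clusters are all assumptions.

```python
import numpy as np

def kmeans_tokens(windows, k=8, iters=20, seed=0):
    """Cluster fixed-length signal windows; return one discrete token per window."""
    rng = np.random.default_rng(seed)
    centers = windows[rng.choice(len(windows), k, replace=False)]
    for _ in range(iters):
        # distance from every window to every center -> nearest-center labels
        d = np.linalg.norm(windows[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):                      # move each center to its cluster mean
            if np.any(labels == j):
                centers[j] = windows[labels == j].mean(axis=0)
    return labels

# Toy "EEG": 100 windows of 32 samples each, tokenized into a vocabulary of 8 symbols.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((100, 32))
tokens = kmeans_tokens(eeg, k=8)
```

The resulting integer sequence plays the role of a vocabulary over which a Transformer with positional encoding can then be trained.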
Real-world robotic operations often face uncertainties that can impede accurate control of manipulators. This study proposes a recurrent neural network (RNN) that combines kinematic and dynamic models to address this issue. Assuming an unknown mass matrix, the proposed method enables effective trajectory tracking for manipulators. In detail, a kinematic controller is designed to determine the desired joint acceleration for a given task using error feedback. The RNN is then integrated with the kinematic controller to combine the robot's dynamic model with a mass-matrix estimator. This integration allows the manipulator system to handle uncertainties while synchronously achieving effective trajectory tracking. Theoretical analysis demonstrates the learning and control capabilities of the RNN. Simulation experiments on a Franka Emika Panda manipulator and comparative studies validate the effectiveness and superiority of the proposed method.
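A kinematic controller that outputs a desired joint acceleration from task-space error feedback is commonly written as a resolved-acceleration rule, qdd = J⁺(ẍ_d + k_d·ė + k_p·e − J̇·q̇). The sketch below shows that rule in isolation; the function name, gains, and the toy identity-Jacobian example are assumptions, not the paper's exact formulation.

```python
import numpy as np

def desired_joint_accel(J, Jdot, qdot, x, xdot, x_d, xdot_d, xddot_d,
                        kp=100.0, kd=20.0):
    """Resolved-acceleration kinematic control: map task-space position/velocity
    errors to a desired joint acceleration via the Jacobian pseudoinverse."""
    a_task = xddot_d + kd * (xdot_d - xdot) + kp * (x_d - x)
    return np.linalg.pinv(J) @ (a_task - Jdot @ qdot)

# Example: identity Jacobian, 1 m error along x, everything else at rest.
J, Jdot = np.eye(2), np.zeros((2, 2))
qdd = desired_joint_accel(J, Jdot, np.zeros(2),
                          x=np.zeros(2), xdot=np.zeros(2),
                          x_d=np.array([1.0, 0.0]), xdot_d=np.zeros(2),
                          xddot_d=np.zeros(2))
```

With only a position error present, the output reduces to kp times that error, which is the pure error-feedback term of the law.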
Complex robotic systems, such as humanoid robot hands, soft robots, and walking robots, pose a challenging control problem due to their high dimensionality and heavy non-linearities. Conventional model-based feedback controllers demonstrate robustness and stability but struggle to cope with the escalating system design and tuning complexity accompanying larger dimensions. In contrast, data-driven methods such as artificial neural networks excel at representing high-dimensional data but lack robustness, generalization, and real-time adaptiveness. In response to these challenges, researchers are directing their focus to biological paradigms, drawing inspiration from the remarkable control capabilities inherent in the human body. This has motivated the exploration of new control methods aimed at closely emulating the motor functions of the brain, informed by current insights from neuroscience. Recent investigations into these brain-inspired control techniques have yielded promising results, notably in tasks involving trajectory tracking and robot locomotion. This paper presents a comprehensive review of the foremost trends in biomimetic brain-inspired control methods for tackling the intricacies associated with controlling complex robotic systems.
Traditional image super-resolution reconstruction algorithms suffer from small receptive fields, insufficient multi-scale feature extraction, and easy loss of image feature information during reconstruction. To address these problems, this paper proposes a super-resolution reconstruction algorithm based on a multi-scale dilated convolution network. First, the algorithm extracts features from the same input image through dilated convolution kernels with different receptive fields, obtaining feature maps at different scales. Then, residual attention dense blocks further extract features of the original low-resolution image; local residual connections are added to fuse multi-scale feature information across channels, while nested residual networks and skip connections accelerate the convergence of the deep network and avoid network degradation. Finally, the features extracted by the deep network are fused with the input features to increase the nonlinear expressive ability of the network and enhance the super-resolution reconstruction effect. Experimental results on the Set5, Set14, BSDS100, and Urban100 test sets show that, compared with the Bicubic, SRCNN, ESPCN, VDSR, DRCN, LapSRN, MemNet, and DSRNet algorithms, the proposed algorithm improves peak signal-to-noise ratio and structural similarity, and the reconstructed images have better visual quality.
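The first stage, extracting features from the same input at several dilation rates, can be sketched in one dimension: a dilated 3-tap kernel covers a receptive field of 2·d+1 samples, so stacking outputs at dilations 1, 2, 4 yields multi-scale features of the same length. The paper operates on 2-D images; this 1-D simplification and all names below are assumptions for illustration.

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """'Same'-padded 1-D convolution with a 3-tap kernel w whose taps are
    spaced `dilation` samples apart (receptive field = 2*dilation + 1)."""
    pad = dilation
    xp = np.pad(x, pad)
    return sum(w[i] * xp[i * dilation: i * dilation + len(x)] for i in range(3))

def multiscale_features(x, w, dilations=(1, 2, 4)):
    """Apply the same kernel at several dilation rates and stack the feature maps."""
    return np.stack([dilated_conv1d(x, w, d) for d in dilations])

feats = multiscale_features(np.ones(8), np.array([1.0, 1.0, 1.0]))
```

Each row of `feats` sees the input at a different scale while the parameter count stays fixed, which is the motivation for dilated kernels over simply enlarging the convolution window.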
Introduction: Unmanned aerial vehicles (UAVs) are widely used in various computer vision applications, especially in intelligent traffic monitoring, as they are agile and simplify operations while boosting efficiency. However, automating these procedures is still a significant challenge due to the difficulty of extracting foreground (vehicle) information from complex traffic scenes.
Methods: This paper presents a unique method for autonomous vehicle surveillance that uses fuzzy C-means (FCM) clustering to segment aerial images. YOLOv8, known for its ability to detect tiny objects, is then used to detect vehicles. Additionally, a module based on ORB features supports vehicle recognition, assignment, and recovery across image frames. Vehicle tracking is accomplished using DeepSORT, which elegantly combines Kalman filtering with deep learning to achieve precise results.
Results: Our proposed model demonstrates remarkable performance in vehicle detection, achieving precisions of 0.86 and 0.84 on the VEDAI and SRTID datasets, respectively.
Discussion: For vehicle tracking, the model achieves accuracies of 0.89 and 0.85 on the VEDAI and SRTID datasets, respectively.