Accurate detection of rail surface cracks is essential but challenging because of noise, low contrast, and density inhomogeneity. To deal with these complex conditions, we propose MDCGCN, a modulated deformable convolution based on a graph convolutional network. MDCGCN is a novel convolution that computes the offsets and modulation scalars of a modulated deformable convolution by applying a graph convolutional network to the feature map. It improves the performance of different networks on rail surface crack detection while only slightly slowing inference. Finally, we demonstrate our method's numerical accuracy, computational efficiency, and effectiveness on the public segmentation dataset RSDD and on our self-built detection dataset SEU-RSCD, and explore an appropriate network structure with MDCGCN in the UNet baseline.
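As an illustration of the mechanism described above, here is a minimal PyTorch sketch of the MDCGCN idea: a graph convolution over the pixel grid predicts the offsets and modulation scalars fed to torchvision's modulated deformable convolution. The 4-neighbour grid graph, the layer names, and the head design are our assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import deform_conv2d

class GridGCN(nn.Module):
    """One GCN layer on the 4-neighbour pixel graph: X' = ReLU(A_hat X W).
    (Our simplification of the graph convolution named in the abstract.)"""
    def __init__(self, channels):
        super().__init__()
        # Fixed normalized adjacency (self-loop + 4 neighbours), applied depthwise.
        a = torch.tensor([[0., 1., 0.], [1., 1., 1.], [0., 1., 0.]]) / 5.0
        self.register_buffer("adj", a.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))
        self.w = nn.Conv2d(channels, channels, 1)  # node feature transform W

    def forward(self, x):
        agg = F.conv2d(x, self.adj, padding=1, groups=x.shape[1])
        return F.relu(self.w(agg))

class MDCGCNConv(nn.Module):
    """Hypothetical MDCGCN block: GCN output drives a deformable conv."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.gcn = GridGCN(in_ch)
        self.offset_head = nn.Conv2d(in_ch, 2 * k * k, 1)  # (dx, dy) per sampling tap
        self.mask_head = nn.Conv2d(in_ch, k * k, 1)        # modulation scalars
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)

    def forward(self, x):
        g = self.gcn(x)
        offset = self.offset_head(g)
        mask = torch.sigmoid(self.mask_head(g))            # scalars in (0, 1)
        return deform_conv2d(x, offset, self.weight, padding=1, mask=mask)

y = MDCGCNConv(16, 32)(torch.randn(1, 16, 64, 64))  # -> (1, 32, 64, 64)
```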
Transformer-based architectures represent the state of the art in image captioning. Because the Transformer's internal structure is inherently parallel, it is unaware of the order of input tokens, so positional encoding is an indispensable component of Transformer-based models. However, most existing absolute positional encodings (APE) have limitations for image captioning. Their spatial positional features are predefined and do not generalize well to other forms of data, such as visual data. Moreover, the positional features are decoupled from one another and lack internal correlation, which affects the accuracy of the spatial-position context representation of visual or textual semantics to a certain extent. We therefore propose a memory positional encoding (MPE), which generalizes to both the visual encoder and the sequence decoder of image captioning models. In MPE, each positional feature is generated recursively by a learnable network with a memory function, so the currently generated positional feature effectively inherits information from the previous positions. In addition, existing positional encodings provide positional features with fixed values and scales; that is, they provide the same positional encoding for different inputs, which is unreasonable. To address these issues of scale and value in current positional encoding methods, we further explore a dynamic memory positional encoding (DMPE) built on MPE. DMPE dynamically adjusts and generates positional features for each input, giving every input a unique positional representation. Extensive experiments on MSCOCO validate the effectiveness of MPE and DMPE.
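The recursive generation in MPE can be pictured with a small sketch: a recurrent cell (here a GRU cell, our choice for illustration, not necessarily the authors' memory network) produces the positional feature for position t from that of position t-1, so each encoding inherits information from its predecessors.

```python
import torch
import torch.nn as nn

class MemoryPositionalEncoding(nn.Module):
    """Minimal sketch of MPE: positions are generated by a recurrence."""
    def __init__(self, d_model):
        super().__init__()
        self.cell = nn.GRUCell(d_model, d_model)            # the "memory" network
        self.start = nn.Parameter(torch.zeros(1, d_model))  # seed for position 0

    def forward(self, seq_len):
        h = torch.zeros_like(self.start)
        feats, x = [], self.start
        for _ in range(seq_len):
            h = self.cell(x, h)        # position t depends on position t-1
            feats.append(h)
            x = h
        return torch.stack(feats, dim=1)                    # (1, seq_len, d_model)

pe = MemoryPositionalEncoding(d_model=512)
tokens = torch.randn(8, 20, 512)
tokens = tokens + pe(20)               # broadcast add, as with standard APE
```

DMPE would additionally condition this recurrence on the current input, for instance by feeding a pooled input feature into the cell, so that different inputs receive different positional features.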
Super-resolution is the process of obtaining a high-resolution (HR) image from one or more low-resolution (LR) images. Single-image super-resolution (SISR) deals with one LR image, while multi-frame super-resolution (MFSR) employs several LR images to reach the HR output. The MFSR pipeline consists of alignment, fusion, and reconstruction. We conduct a theoretical analysis using sparse coding (SC) and the iterative shrinkage-thresholding algorithm (ISTA) to fill the gap in mathematical justification for the execution order of the optimal MFSR pipeline. Our analysis recommends executing alignment and fusion before the reconstruction stage (whether reconstruction is done through deconvolution or SISR techniques). The suggested order ensures enhanced performance in terms of peak signal-to-noise ratio and structural similarity index. The optimal pipeline also reduces computational complexity compared to intuitive approaches that apply SISR to each input LR image. We also demonstrate the usefulness of SC in analyzing computer vision tasks such as MFSR, leveraging the sparsity assumption on natural images. Simulation results support the findings of our theoretical analysis, both quantitatively and qualitatively.
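For reference, ISTA, the analysis tool named above, fits in a few lines of NumPy: it minimizes 0.5*||y - D a||_2^2 + lam*||a||_1 over the sparse code a. The dictionary and data below are random placeholders, not the paper's setup.

```python
import numpy as np

def ista(y, D, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding for sparse coding."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)               # gradient of the quadratic term
        z = a - grad / L                       # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))             # placeholder dictionary
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = 1.0
y = D @ a_true
a_hat = ista(y, D, lam=0.05)                   # sparse code close to a_true
```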
Numerous underwater image enhancement methods have been proposed to correct color and enhance contrast. Although these methods achieve satisfactory enhancement in some respects, few take into account the effect of the raw image's illumination distribution on the enhancement result, often leading to oversaturation or undersaturation. To solve these problems, we design an underwater image enhancement network guided by a brightness mask with multi-attention embedding, called BMGMANet. Specifically, considering that different regions of an underwater image degrade to different degrees, which can be implicitly reflected by a brightness mask characterizing the image's illumination distribution, we design a decoder network guided by a reverse brightness mask to enhance the dark regions while suppressing excessive enhancement of the bright regions. In addition, a triple-attention module is designed to further enhance the contrast of the underwater image and recover more details. Extensive comparative experiments demonstrate that the enhancement results of our network outperform those of existing state-of-the-art methods. Further experiments also show that BMGMANet can effectively enhance non-uniformly illuminated underwater images and improve the performance of salient object detection in underwater images.
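The reverse-brightness-mask guidance can be sketched as follows (our illustration; the actual BMGMANet decoder is more elaborate): a luma-based brightness map is inverted so that dark regions receive large gating weights and bright regions small ones, concentrating enhancement where degradation is strongest.

```python
import torch

def reverse_brightness_mask(img):
    """img: (N, 3, H, W) in [0, 1] -> mask in [0, 1], large in dark regions."""
    # ITU-R BT.601 luma as a simple proxy for the illumination distribution.
    w = torch.tensor([0.299, 0.587, 0.114], device=img.device).view(1, 3, 1, 1)
    brightness = (img * w).sum(dim=1, keepdim=True)
    return 1.0 - brightness

img = torch.rand(2, 3, 128, 128)                      # raw underwater image
feat = torch.rand(2, 32, 128, 128)                    # some decoder feature map
gated = feat * (1.0 + reverse_brightness_mask(img))   # boost dark-region features
```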
In recent years, numerous efficient schemes employing deep neural networks have been developed for image hashing. However, little attention has been paid to the performance and robustness of these deep hashing networks when the input images lack high spatial resolution and visual quality. This is a critical problem, as access to high-quality, high-resolution images is often not guaranteed in real-life applications. In this paper, we propose a novel three-stage method for joint image upsampling and hashing. In the first two stages, we obtain two deep neural networks, trained individually for image super-resolution and image hashing, respectively. We then fine-tune the two networks using representation learning and an alternating optimization process, in order to produce a set of parameters optimized for the joint task. The effectiveness of the ideas behind the proposed method is demonstrated through a range of experiments. The proposed scheme is shown to outperform state-of-the-art image super-resolution and hashing methods, even when those methods are trained simultaneously in a joint end-to-end manner.
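A hedged sketch of the third stage, as we read the abstract: the two pretrained networks are fine-tuned on a joint objective by alternating optimization, stepping only one network's optimizer per iteration. The stand-in models, loss terms, and data below are placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Stand-ins for the networks pretrained in stages one and two.
sr_net = nn.Sequential(nn.Upsample(scale_factor=4, mode="bilinear"),
                       nn.Conv2d(3, 3, 3, padding=1))
hash_net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(8, 48), nn.Tanh())    # 48-bit relaxed codes
opt_sr = torch.optim.Adam(sr_net.parameters(), lr=1e-4)
opt_hash = torch.optim.Adam(hash_net.parameters(), lr=1e-4)
mse = nn.MSELoss()

# Dummy batches: (LR image, HR image, target hash codes).
loader = [(torch.rand(4, 3, 16, 16), torch.rand(4, 3, 64, 64),
           torch.sign(torch.randn(4, 48))) for _ in range(4)]

for step, (lr_img, hr_img, codes) in enumerate(loader):
    if step % 2 == 0:                        # update the SR net only
        sr_img = sr_net(lr_img)
        loss = mse(sr_img, hr_img) + mse(hash_net(sr_img), codes)
        opt_sr.zero_grad(); loss.backward(); opt_sr.step()
    else:                                    # update the hash net only
        with torch.no_grad():
            sr_img = sr_net(lr_img)          # SR net held fixed
        loss = mse(hash_net(sr_img), codes)
        opt_hash.zero_grad(); loss.backward(); opt_hash.step()
```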
To render a spherical (360° or omnidirectional) image on a planar display, a 2D image, called a viewport, must be obtained by projecting a sphere region onto a plane according to the user's viewing direction and a predefined field of view (FoV). However, any sphere-to-plane projection introduces geometric distortions, such as object stretching and/or bending of straight lines, whose intensity increases with the FoV. In this paper, a fully automatic content-aware projection is proposed to reduce these geometric distortions when high FoVs are used. The new projection is based on the Pannini projection, whose parameters are first optimized globally according to the image content, followed by a local conformality improvement of relevant viewport objects. A crowdsourcing subjective test showed that the proposed projection is the most preferred solution among the considered state-of-the-art sphere-to-plane projections, producing viewports with a more pleasant visual quality.
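For context, the basic Pannini projection (Sharpless et al.) maps a sphere direction at longitude phi and latitude theta, taken relative to the viewing direction, to viewport coordinates; its parameter d trades straight-line preservation against stretching, with d = 0 reducing to the rectilinear projection. A minimal sketch:

```python
import math

def pannini(phi, theta, d=1.0):
    """Map a sphere direction (phi, theta), in radians, to viewport (x, y)."""
    s = (d + 1.0) / (d + math.cos(phi))   # radial scale factor
    return s * math.sin(phi), s * math.tan(theta)

# A point 40 degrees off-centre, horizontally:
x, y = pannini(math.radians(40.0), 0.0)
```

In the proposed method this parameter is not fixed: it is first optimized globally from the image content, before the local conformality refinement.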