Automatic segmentation of retinal fundus vessels and accurate classification of arteries and veins play an important role in clinical diagnosis. To address arteriovenous misclassification and ambiguous segmentation of fine vessels, this article proposes a retinal vessel segmentation and arteriovenous classification network that combines adversarial training with an attention mechanism. The network consists of three core components: a generator, a discriminator, and a segmenter. To mitigate domain shift, U-Net is employed as the discriminator, and arterial and venous vessel samples are generated by the generator using an unsupervised domain adaptation (UDA) approach. A self-attention mechanism strengthens attention to vessel-edge features and terminal fine vessels, improving both artery/vein (A/V) classification and the segmentation of fine vessels. Non-strided convolution and non-pooled downsampling are also used to avoid losing fine-grained information and learning less effective feature representations. On the DRIVE dataset, multi-class vessel segmentation achieves an F1-score (F1) of 0.7496 and an accuracy of 0.9820, and A/V classification accuracy improves by 1.35% over AU-Net. These results demonstrate that the proposed enhancements to the baseline U-Net improve the automatic segmentation and classification of blood vessels.
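The abstract does not specify how the self-attention mechanism is wired into the segmenter, so the following is only a minimal illustrative sketch of spatial self-attention over a feature map, in the spirit of letting every pixel (e.g., a thin-vessel endpoint) aggregate context from all other positions. The function name `self_attention_2d`, the projection matrices, and the residual connection are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_2d(feat, w_q, w_k, w_v):
    """Spatial self-attention over a C x H x W feature map (illustrative).

    Each of the N = H*W positions attends to every other position, so a
    response at a terminal fine vessel can draw on the whole vessel tree.
    """
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)                 # flatten spatial dims: C x N
    q = w_q @ x                                # queries: C' x N
    k = w_k @ x                                # keys:    C' x N
    v = w_v @ x                                # values:  C  x N
    attn = softmax((q.T @ k) / np.sqrt(q.shape[0]), axis=-1)  # N x N weights
    out = v @ attn.T                           # attention-weighted values: C x N
    return feat + out.reshape(c, h, w)         # residual connection (assumed)

rng = np.random.default_rng(0)
c, h, w, c_proj = 4, 8, 8, 2                   # toy sizes for illustration
feat = rng.standard_normal((c, h, w))
out = self_attention_2d(feat,
                        rng.standard_normal((c_proj, c)),
                        rng.standard_normal((c_proj, c)),
                        rng.standard_normal((c, c)))
print(out.shape)  # (4, 8, 8): attention preserves the feature-map shape
```

In a real network the projections would be learned 1x1 convolutions and the map would be downsampled first to keep the N x N attention matrix tractable; this sketch only shows the mechanism itself.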