Precise matching of visual features between frames is crucial for the robustness and accuracy of visual odometry and SLAM (Simultaneous Localization and Mapping) systems. However, factors such as complex illumination and texture variations can cause significant errors in feature correspondence that degrade the accuracy of visual localization. In this paper, we use feature descriptors to validate and assess the quality of correspondences produced by the optical flow algorithm, and we construct the information matrix of the visual measurements accordingly, which improves the accuracy of visual localization within a nonlinear optimization framework. The proposed approach to optical flow quality assessment leverages the complementary advantages of optical flow tracking and descriptor matching, and it is applicable to other visual odometry or SLAM systems that use the optical flow algorithm for feature correspondence. We first demonstrate, through simulation experiments, the statistical correlation between the optical flow error and the descriptor Hamming distance. Based on this correlation, the optical flow tracking error is then quantitatively estimated from the descriptor Hamming distance. As a result, features with large tracking errors are rejected as outliers, while the remaining features are retained with an error model, i.e., an information matrix in the nonlinear optimization, that is consistent with the visual tracking error. Furthermore, rather than the direct tracking error between the initial observation frame and the current frame, we propose the cumulative tracking error for successive frames (CTE-SF) to improve the efficiency of descriptor extraction during successive visual tracking, as it requires no construction of multi-scale image pyramids. We evaluated the proposed solution on open datasets and on our in-house embedded positioning device. The results indicate that the proposed solution improves the accuracy of visual odometry systems that use the optical flow algorithm for feature correspondence (e.g., VINS-Mono) by approximately 10%–50%, while requiring only an 11% increase in computational resource consumption. We have made our implementation open source at: https://github.com/Jett64/VINS-with-Error-Model.
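
To make the workflow concrete, the sketch below illustrates the general idea (not the authors' implementation) using OpenCV: after LK optical flow tracking, descriptors are computed at the original and tracked feature locations, and the descriptor Hamming distance is used both to reject unreliable tracks and to assign each surviving feature an assumed pixel noise, whose inverse square serves as the per-feature information weight. The rejection threshold, the linear distance-to-noise mapping, and the use of ORB descriptors at a fixed scale are illustrative assumptions, not values from the paper.

```cpp
// Minimal sketch: descriptor-based quality assessment of LK optical flow tracks.
// Threshold and distance-to-sigma mapping below are assumptions for illustration.
#include <opencv2/opencv.hpp>
#include <map>
#include <vector>

struct WeightedTrack {
    cv::Point2f prev_pt, curr_pt;
    double sigma_px;   // assumed pixel std-dev of the track, derived from Hamming distance
    double weight;     // information = 1 / sigma^2, usable in a nonlinear optimizer
};

// Compute ORB descriptors at given points; returns original index -> descriptor row.
static std::map<int, cv::Mat> describeAt(const cv::Mat& img,
                                         const std::vector<cv::Point2f>& pts) {
    std::vector<cv::KeyPoint> kps;
    for (size_t i = 0; i < pts.size(); ++i) {
        cv::KeyPoint kp(pts[i], 31.f);          // fixed patch size, no image pyramid
        kp.class_id = static_cast<int>(i);      // remember the original index
        kps.push_back(kp);
    }
    cv::Mat desc;
    cv::ORB::create()->compute(img, kps, desc); // may drop keypoints near the border
    std::map<int, cv::Mat> out;
    for (size_t k = 0; k < kps.size(); ++k)
        out[kps[k].class_id] = desc.row(static_cast<int>(k));
    return out;
}

std::vector<WeightedTrack> trackAndWeight(const cv::Mat& prev_img,
                                          const cv::Mat& curr_img,
                                          const std::vector<cv::Point2f>& prev_pts) {
    // 1. LK optical flow provides the raw correspondences (as in VINS-Mono).
    std::vector<cv::Point2f> curr_pts;
    std::vector<uchar> status;
    std::vector<float> lk_err;
    cv::calcOpticalFlowPyrLK(prev_img, curr_img, prev_pts, curr_pts, status, lk_err);

    // 2. Descriptors at the original and the tracked locations.
    std::map<int, cv::Mat> d_prev = describeAt(prev_img, prev_pts);
    std::map<int, cv::Mat> d_curr = describeAt(curr_img, curr_pts);

    // 3. Hamming distance -> outlier rejection and per-feature information weight.
    const double kRejectDist = 60.0;  // assumed outlier threshold (bits)
    const double kSigma0 = 0.5;       // assumed base pixel noise (px)
    const double kSlope  = 0.05;      // assumed extra pixel noise per bit of distance
    std::vector<WeightedTrack> tracks;
    for (size_t i = 0; i < prev_pts.size(); ++i) {
        if (!status[i]) continue;
        auto a = d_prev.find(static_cast<int>(i));
        auto b = d_curr.find(static_cast<int>(i));
        if (a == d_prev.end() || b == d_curr.end()) continue;
        double dist = cv::norm(a->second, b->second, cv::NORM_HAMMING);
        if (dist > kRejectDist) continue;          // reject feature as an outlier
        double sigma = kSigma0 + kSlope * dist;    // assumed linear error model
        tracks.push_back({prev_pts[i], curr_pts[i], sigma, 1.0 / (sigma * sigma)});
    }
    return tracks;
}
```

In such a setup, the returned weights would scale the visual residuals in the back-end optimization, so that features with larger estimated tracking error contribute less to the pose estimate; the CTE-SF idea described above would additionally accumulate the per-frame error estimates over successive frames instead of comparing against the initial observation frame.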