This study evaluates the role and performance of microphysical processes and parameterizations in simulating two distinct rainfall events over Nigeria. Four microphysics (MP) schemes in the Weather Research and Forecasting (WRF) model—Goddard, Morrison, Thompson, and WDM6—were utilized. The analysis focused on mean rainfall rates, hydrometeor pathways, and spatial rainfall accumulation amounts and patterns.
Results indicate that while most MP schemes underestimated the mean rainfall rate, they reasonably captured the spatial distributions in both events. Based on statistical metrics of 24-h accumulated rainfall, the Goddard scheme produced the lowest mean absolute bias (MAB) and the highest rainfall detection skill, measured by the probability of detection (POD) and threat score (TS), for the June rainfall event. For the February event, the Morrison scheme exhibited the lowest MAB and achieved high POD and TS values. The differences in rainfall production among the MP schemes were primarily attributed to variations in the growth rates of rainwater hydrometeors within the hydrometeor pathways (HPs), while rainfall duration was influenced by sustained collision-coalescence of cloud droplets. Excessive cloud water production also delayed rainwater formation, leading to reduced simulated rainfall. In addition, rapid melting of large graupel mass significantly affected the performance of the different schemes in simulating rainfall. Finally, variations in low- to mid-tropospheric vertical velocity and in surface parameters (such as temperature and specific humidity) were shown to strongly control microphysical processes and, consequently, rainfall production.
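For readers unfamiliar with the verification metrics cited above, the following minimal Python sketch shows how MAB, POD, and TS are conventionally computed from simulated and observed 24-h accumulations. The rain/no-rain threshold and the sample values are illustrative assumptions, not data from this study.

```python
# Standard 2x2 contingency-table scores (POD, TS) and mean absolute bias (MAB).
# The 1.0 mm threshold and the sample accumulations below are illustrative only.

def contingency(sim, obs, threshold):
    """Count hits, misses, and false alarms for a rain/no-rain threshold."""
    hits = misses = false_alarms = 0
    for s, o in zip(sim, obs):
        if s >= threshold and o >= threshold:
            hits += 1                 # rain simulated and observed
        elif s < threshold and o >= threshold:
            misses += 1               # rain observed but not simulated
        elif s >= threshold and o < threshold:
            false_alarms += 1         # rain simulated but not observed
    return hits, misses, false_alarms

def pod(hits, misses):
    """Probability of detection: fraction of observed events captured."""
    return hits / (hits + misses)

def ts(hits, misses, false_alarms):
    """Threat score (critical success index)."""
    return hits / (hits + misses + false_alarms)

def mab(sim, obs):
    """Mean absolute bias of accumulated rainfall."""
    return sum(abs(s - o) for s, o in zip(sim, obs)) / len(sim)

# Illustrative 24-h accumulations (mm) at five grid points
sim = [12.0, 0.5, 8.0, 0.0, 20.0]
obs = [10.0, 2.0, 0.0, 0.0, 25.0]
h, m, f = contingency(sim, obs, threshold=1.0)
print(pod(h, m), ts(h, m, f), mab(sim, obs))
```

Higher POD and TS (both bounded by 1) indicate better event detection, while a lower MAB indicates smaller amount errors; the schemes in this study were ranked on exactly these quantities.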
Overall, the analysis suggests that more sophisticated MP schemes do not necessarily provide better simulations of precipitable hydrometeor pathways.