The era of the artificial intelligence of things (AIoT) poses significant challenges for edge visual processing systems under strict processing latency, cost, and energy budgets. The emergence of computationally efficient, biologically inspired spiking neural networks (SNNs) and event-driven neuromorphic architectures in recent years has fostered a computing paradigm shift to address these challenges. In this paper, we propose a neuromorphic processor architecture for a multi-layer convolutional SNN (codenamed the HMAX SNN model) inspired by the hierarchy of the human visual cortex. The main contributions of this work are as follows: 1) It proposes a fully event-driven, modular, configurable, and scalable neuromorphic architecture that allows flexible tradeoffs among implementation cost, processing speed, and visual recognition accuracy for multi-layer convolutional SNNs. 2) It proposes a run-time reconfigurable learning engine that enables fast on-chip unsupervised spike-timing-dependent plasticity (STDP) learning for the feature-extraction convolutional layers, as well as supervised STDP learning for the feature-classification fully connected (FC) layer, in a time-multiplexed manner. These techniques raise on-chip learning accuracy beyond 97% on the Modified National Institute of Standards and Technology database (MNIST) images for the first time among existing edge neuromorphic systems, at reasonable computational and memory cost. Our hardware processor architecture was prototyped on a low-cost Zedboard Zynq-7020 field-programmable gate array (FPGA) device and validated on the MNIST, Fashion-MNIST, Olivetti Research Laboratory (ORL) human-face, and ETH-80 image datasets. The experimental results demonstrate that the proposed neuromorphic architecture achieves competitively high on-chip learning accuracy, high inference throughput, and high energy efficiency with relatively low hardware resource consumption.
We anticipate that the HMAX SNN processor can facilitate the deployment of neuromorphic processors in practical edge applications.
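For readers unfamiliar with the STDP rule referenced above, the following is a minimal sketch of a standard pair-based STDP weight update; the parameter values (`a_plus`, `a_minus`, `tau_plus`, `tau_minus`) and weight bounds are illustrative assumptions only, not the values or the hardware formulation used in this work:

```python
import numpy as np

def stdp_update(w, t_pre, t_post,
                a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP (illustrative parameters): potentiate the synapse
    when the presynaptic spike precedes the postsynaptic spike, and
    depress it otherwise, with exponentially decaying magnitude."""
    dt = t_post - t_pre  # spike-time difference, e.g. in ms
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau_plus)    # causal pair -> potentiation
    else:
        dw = -a_minus * np.exp(dt / tau_minus)  # anti-causal pair -> depression
    return float(np.clip(w + dw, w_min, w_max))  # keep weight in bounds

# A causal spike pair strengthens the synapse; an anti-causal one weakens it.
w0 = 0.5
w_ltp = stdp_update(w0, t_pre=10.0, t_post=15.0)  # pre before post
w_ltd = stdp_update(w0, t_pre=15.0, t_post=10.0)  # post before pre
```

In hardware, such exponential traces are typically approximated with decaying counters per synapse or neuron; the supervised variant additionally gates the update by a teacher signal on the output layer.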