Federated Learning (FL) enables multiple parties to collaboratively train a machine learning model while keeping their data private. Because raw data is never shared among participants, this approach substantially improves data security. However, a critical challenge in FL is that sensitive information can still leak through the shared model updates. To address this, differential privacy techniques, which add calibrated random noise to data or model updates, are used to prevent individual data points from being inferred. Traditional approaches to differential privacy typically use a fixed privacy budget, which does not account for the varying sensitivity of data and can degrade model accuracy. To overcome these limitations, we introduce HierFedPDP, a new FL framework that jointly optimizes data privacy and model performance. HierFedPDP employs a three-tier client–edge–cloud architecture, leveraging edge computing to offload work from the central server. At its core is a personalized local differential privacy mechanism that tailors privacy settings to data sensitivity, strengthening data protection while preserving high utility. Our framework not only fortifies privacy but also improves model accuracy: experiments on MNIST show that HierFedPDP outperforms existing models, improving accuracy by 0.84% to 2.36%, and experiments on CIFAR-10 also show meaningful improvements. This work advances the privacy-protection capabilities of FL and offers practical guidance for designing more efficient distributed learning systems.
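The abstract does not spell out the noise mechanism, so the following is only an illustrative sketch of a personalized local differential privacy step of the kind described: each client clips its model update and adds Gaussian noise scaled to its own privacy budget. The function name `personalized_ldp_update`, the example per-client budgets, and the choice of the Gaussian mechanism with L2 clipping are assumptions for demonstration, not HierFedPDP's actual algorithm.

```python
import numpy as np

def personalized_ldp_update(update, epsilon, delta=1e-5, clip_norm=1.0):
    """Clip a client's update to bound sensitivity, then add Gaussian
    noise calibrated to that client's personal (epsilon, delta) budget."""
    # L2 clipping bounds the sensitivity of the update by clip_norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Standard Gaussian-mechanism noise scale for (epsilon, delta)-DP:
    # sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=update.shape)

# Hypothetical per-client budgets: a smaller epsilon (stronger noise)
# for clients with more sensitive data, a larger epsilon otherwise.
client_budgets = {"client_a": 0.5, "client_b": 2.0, "client_c": 8.0}

update = np.random.randn(10)  # stand-in for a flattened model update
for client, eps in client_budgets.items():
    noisy = personalized_ldp_update(update, epsilon=eps)
    print(client, "noise L2:", np.linalg.norm(noisy - update))
```

In a hierarchical deployment such as the one the abstract describes, each client would apply a step like this locally before sending its update to an edge aggregator, so neither the edge nor the cloud ever sees an unperturbed update.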