Edge computing is an effective approach to meeting the high demand for computing power on end devices caused by dense task distribution in the mobile Internet. When device resources and computing power are limited, optimizing the task offloading decision becomes a key issue for improving computing efficiency. We improve a heuristic algorithm by incorporating the characteristics of dense task workloads, optimizing the task offloading decision at lower cost. To overcome the limitation of requiring large amounts of real-time information, we employ a reinforcement learning (RL) algorithm and design a new reward function that enables the agent to learn from its interactions with the environment. To address poor system performance under uncertain initial environments, we propose a Softmax-based Q-learning scheme for a multi-layer agent RL framework. The offloading process is optimized by coordinating agents with different views of the environment across layers, while balancing the exploration-exploitation trade-off to improve the algorithm's performance in more complex dynamic environments. Experimental results show that, in mobile environments with high device density and diverse tasks, the proposed algorithm achieves significant improvements in key indicators such as task success rate, waiting time, and energy consumption. In particular, it exhibits strong robustness and efficiency in complex dynamic environments, substantially outperforming the current benchmark algorithms.
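
To illustrate the Softmax (Boltzmann) action selection that the abstract refers to, the following is a minimal sketch of standard tabular Q-learning with a Softmax policy; the temperature `tau`, learning rate `alpha`, and discount `gamma` are assumed illustrative parameters, and the paper's actual reward function and multi-layer agent coordination are not reproduced here.

```python
import numpy as np

def softmax_action(q_values, tau=1.0):
    """Boltzmann (Softmax) action selection: higher-valued actions are
    favoured, but every action keeps a nonzero probability, so the agent
    balances exploration and exploitation."""
    prefs = (q_values - np.max(q_values)) / tau   # shift for numerical stability
    probs = np.exp(prefs) / np.sum(np.exp(prefs))
    return np.random.choice(len(q_values), p=probs)

def q_learning_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update toward the TD target."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```

Lowering `tau` makes the policy greedier (more exploitation), while raising it makes action choices closer to uniform (more exploration), which is the trade-off the proposed scheme tunes across agent layers.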