VizDoom is a flexible platform for reinforcement learning (RL) research built on the Doom game environment. This article analyzes the effectiveness of the proximal policy optimization (PPO) algorithm in the VizDoom Deadly Corridor scenario. PPO has not previously been assessed adequately in a first-person shooter research environment such as VizDoom. This article therefore applies reward shaping and curriculum learning to improve the algorithm's performance in complex and challenging scenarios of the first-person shooter game Doom. The goal is to analyze and evaluate how effectively PPO performs in this three-dimensional VizDoom scenario. The trained agent achieved scores of up to 734 on the first difficulty level, 1576 on the second, 1920 on the third, 2280 on the fourth, and 1605 on the fifth and highest difficulty level of the scenario. These results are compared to provide insights for researchers optimizing reinforcement learning agents in games. The study also discusses the potential of the Doom game as a platform for artificial intelligence research. The findings can be used to enhance the performance of reinforcement learning algorithms in game-based environments.
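
The article does not specify the implementation details here; the following is a minimal sketch of how such a setup could look, assuming a Gymnasium-style wrapper around the Deadly Corridor scenario, the PPO implementation from stable-baselines3, and a curriculum driven by the `doom_skill` difficulty setting. The shaping weights, per-stage timestep budget, and config path are illustrative assumptions, not the authors' code.

```python
# A minimal sketch (not the authors' code): PPO with reward shaping and a
# difficulty-level curriculum on the ViZDoom Deadly Corridor scenario,
# assuming the `vizdoom`, `gymnasium`, and `stable-baselines3` packages.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
import vizdoom as vzd
from stable_baselines3 import PPO


class DeadlyCorridorEnv(gym.Env):
    """Gymnasium wrapper with shaped rewards; the shaping weights are illustrative."""

    def __init__(self, skill=1, config="deadly_corridor.cfg"):  # adjust path to your ViZDoom install
        super().__init__()
        self.game = vzd.DoomGame()
        self.game.load_config(config)
        self.game.set_doom_skill(skill)                      # curriculum knob: 1 (easiest) .. 5 (hardest)
        self.game.set_screen_resolution(vzd.ScreenResolution.RES_160X120)
        self.game.set_screen_format(vzd.ScreenFormat.GRAY8)
        self.game.set_window_visible(False)
        self.game.set_available_game_variables([
            vzd.GameVariable.HEALTH,
            vzd.GameVariable.DAMAGE_TAKEN,
            vzd.GameVariable.HITCOUNT,
            vzd.GameVariable.SELECTED_WEAPON_AMMO,
        ])
        self.game.init()
        self.observation_space = spaces.Box(0, 255, shape=(120, 160, 1), dtype=np.uint8)
        self.action_space = spaces.Discrete(7)               # 7 buttons defined by the scenario config
        self._prev = np.zeros(4)

    def _obs(self):
        state = self.game.get_state()
        if state is None:
            return np.zeros((120, 160, 1), dtype=np.uint8)
        return state.screen_buffer.reshape(120, 160, 1)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.game.new_episode()
        self._prev = np.array(self.game.get_state().game_variables, dtype=np.float64)
        return self._obs(), {}

    def step(self, action):
        one_hot = [0] * 7
        one_hot[int(action)] = 1
        reward = self.game.make_action(one_hot, 4)           # repeat each action for 4 tics
        terminated = self.game.is_episode_finished()
        if not terminated:
            cur = np.array(self.game.get_state().game_variables, dtype=np.float64)
            _, d_taken, d_hits, d_ammo = cur - self._prev
            # Reward shaping (illustrative weights): reward hits, penalize damage taken and wasted ammo.
            reward += 10.0 * d_hits - 5.0 * d_taken + 1.0 * min(d_ammo, 0.0)
            self._prev = cur
        return self._obs(), reward, terminated, False, {}

    def close(self):
        self.game.close()


# Curriculum: keep training the same PPO model while raising the difficulty level.
model = None
for skill in range(1, 6):
    env = DeadlyCorridorEnv(skill=skill)
    if model is None:
        model = PPO("CnnPolicy", env, learning_rate=1e-4, n_steps=2048, verbose=1)
    else:
        model.set_env(env)
    model.learn(total_timesteps=100_000)                     # per-stage budget is a placeholder
    env.close()
model.save("ppo_deadly_corridor")
```

In this sketch the curriculum is implemented by reusing a single PPO model across stages and only increasing `doom_skill` between them, while reward shaping adds dense feedback (hits scored, damage taken, ammo spent) on top of the scenario's sparse score.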