This research develops a deep reinforcement learning approach to the challenging task of robotic gripping guided by tactile sensor feedback. By leveraging deep reinforcement learning, we avoid manual feature design, which simplifies the problem and allows the robot to acquire gripping strategies through trial-and-error learning. Our method uses an off-policy reinforcement learning model that combines the deep deterministic policy gradient (DDPG) architecture with twin delayed (TD3) features to grip floating objects with high precision. We formulate a comprehensive reward function that provides the agent with precise, informative feedback for learning the gripping task. The model is trained entirely in a simulated environment built on the PyBullet framework, without demonstrations or prior knowledge of the task. As a case study, we examine a gripping task with a 3-finger Robotiq gripper, in which the gripper must approach a floating object, track it, and finally grip it. Training in simulation lets us vary scenarios and conditions, enabling the agent to develop a robust and adaptable gripping policy.
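To make the setup concrete, the listing below is a minimal sketch, not the authors' implementation, of how a PyBullet environment with a drifting target could be coupled to an off-policy TD3 agent. The environment class FloatingGraspEnv, the observation layout, the reward weights, the stand-in cube asset, and the use of gymnasium and stable-baselines3 are assumptions made purely for illustration; the study described above uses a Robotiq 3-finger gripper model, tactile observations, and its own reward function.

# Illustrative sketch only: a simplified PyBullet environment where a point
# "gripper" must approach and stay on a drifting object, trained with TD3.
# Class name, reward weights, and hyperparameters are assumptions, not the
# authors' setup; the cube asset stands in for the Robotiq 3-finger scene.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
import pybullet as p
import pybullet_data
from stable_baselines3 import TD3


class FloatingGraspEnv(gym.Env):
    """Point end-effector must reach a freely floating, slowly drifting cube."""

    def __init__(self):
        self.client = p.connect(p.DIRECT)
        p.setAdditionalSearchPath(pybullet_data.getDataPath())
        p.setGravity(0, 0, 0)  # the object floats freely
        self.target = p.loadURDF("cube_small.urdf", basePosition=[0.3, 0.0, 0.3])
        self.ee_pos = np.zeros(3, dtype=np.float32)
        self.prev_action = np.zeros(3, dtype=np.float32)
        self.steps = 0
        # observation: end-effector position, object position, previous action
        self.observation_space = spaces.Box(-2.0, 2.0, shape=(9,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)

    def _obs(self):
        target_pos, _ = p.getBasePositionAndOrientation(self.target)
        return np.concatenate(
            [self.ee_pos, np.asarray(target_pos), self.prev_action]
        ).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.ee_pos = np.zeros(3, dtype=np.float32)
        self.prev_action[:] = 0.0
        self.steps = 0
        start = self.np_random.uniform(0.2, 0.4, size=3)
        p.resetBasePositionAndOrientation(self.target, start.tolist(), [0, 0, 0, 1])
        return self._obs(), {}

    def step(self, action):
        action = np.clip(action, -1.0, 1.0).astype(np.float32)
        self.ee_pos += 0.01 * action  # kinematic end-effector motion
        # drift the floating object a little each step
        pos, orn = p.getBasePositionAndOrientation(self.target)
        drift = self.np_random.normal(0.0, 0.002, size=3)
        p.resetBasePositionAndOrientation(
            self.target, (np.asarray(pos) + drift).tolist(), orn
        )
        p.stepSimulation()

        dist = float(np.linalg.norm(self.ee_pos - np.asarray(pos)))
        # illustrative shaped reward: approach term, near-contact bonus,
        # and a small action penalty that encourages smooth motion
        reward = -dist + (1.0 if dist < 0.03 else 0.0) - 0.01 * float(np.sum(action ** 2))
        self.prev_action = action
        self.steps += 1
        terminated = dist < 0.01      # proxy for a successful grip
        truncated = self.steps >= 200
        return self._obs(), reward, terminated, truncated, {}

    def close(self):
        p.disconnect(self.client)


if __name__ == "__main__":
    env = FloatingGraspEnv()
    # TD3 = DDPG-style actor-critic with twin critics, delayed policy updates,
    # and target-policy smoothing, as referenced in the text above.
    model = TD3("MlpPolicy", env, learning_rate=1e-3, buffer_size=100_000, verbose=1)
    model.learn(total_timesteps=10_000)
    env.close()

In this sketch the near-contact bonus plays the role that tactile contact feedback plays in the actual task, and the drifting cube stands in for the floating object that the Robotiq gripper must pursue and grip.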