In today's information age, table images play a crucial role in storing structured information, making table image recognition an essential technology in many fields. However, accurately recognizing the structure and text content of diverse, complex table images remains a challenge. Recently, large language models (LLMs) have demonstrated exceptional capabilities across a wide range of natural language processing tasks. Applying LLMs to correct the structure and text content produced by table image recognition therefore offers a novel solution. This paper introduces a new method, TableGPT, which combines table recognition with LLMs and develops a specialized multimodal agent to enhance the effectiveness of table image recognition. Our approach is divided into four stages. In the first stage, TableGPT_agent evaluates whether the input is a table image and, upon confirmation, applies transformer-based algorithms for preliminary recognition. In the second stage, the agent converts the recognition results into HTML format and autonomously assesses whether corrections are needed; if so, the data are passed to a trained LLM to achieve more accurate table recognition and optimization. In the third stage, the agent gauges user satisfaction through feedback and applies super-resolution algorithms to low-quality images, as poor image quality is often the main cause of dissatisfaction. Finally, the agent feeds both the enhanced and original images into the trained model and integrates the results to obtain the optimal table text representation. Our experiments show that trained LLMs can effectively interpret table images, improving the Tree Edit Distance Similarity (TEDS) score by an average of 4% over the best current table recognition methods on both public and private datasets, and demonstrating stronger performance in correcting structural and textual errors. We also examine the impact of image super-resolution on low-quality table images: combined with the LLMs, it raises the TEDS score by 54%, greatly enhancing recognition performance. Finally, by leveraging agent technology, our multimodal model further improves table recognition, with the TEDS score of TableGPT_agent surpassing that of GPT-4 by 34%.
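To make the four-stage flow easier to follow, the sketch below outlines one way the agent's control loop could be organized in Python. It is a minimal illustration under stated assumptions, not the authors' implementation: every injected callable (is_table_image, recognize, needs_correction, llm_correct, user_satisfied, super_resolve, merge) is a hypothetical placeholder standing in for the corresponding component described in the abstract.

```python
"""Illustrative sketch of the four-stage TableGPT_agent control flow.

All callables are hypothetical placeholders supplied by the caller;
they are not part of any published TableGPT API.
"""

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class TableGPTAgentSketch:
    is_table_image: Callable[[bytes], bool]    # stage 1: gate non-table inputs
    recognize: Callable[[bytes], str]          # stage 1: transformer-based recognition -> HTML
    needs_correction: Callable[[str], bool]    # stage 2: agent's self-assessment of the HTML
    llm_correct: Callable[[bytes, str], str]   # stage 2: trained LLM refines structure/text
    user_satisfied: Callable[[str], bool]      # stage 3: user feedback signal
    super_resolve: Callable[[bytes], bytes]    # stage 3: enhance low-quality images
    merge: Callable[[str, str], str]           # stage 4: fuse original + enhanced results

    def run(self, image: bytes) -> Optional[str]:
        # Stage 1: confirm the input is a table image, then do preliminary recognition.
        if not self.is_table_image(image):
            return None
        html = self.recognize(image)

        # Stage 2: correct the HTML with the trained LLM only if the agent deems it necessary.
        if self.needs_correction(html):
            html = self.llm_correct(image, html)

        # Stage 3: on negative feedback, assume low image quality and apply super-resolution.
        if not self.user_satisfied(html):
            enhanced = self.super_resolve(image)
            enhanced_html = self.llm_correct(enhanced, self.recognize(enhanced))
            # Stage 4: integrate predictions from the original and enhanced images.
            html = self.merge(html, enhanced_html)

        return html
```

The dependency-injection style is only a presentation choice here; it keeps the staging explicit while leaving the recognition model, LLM, and super-resolution component unspecified.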