Auscultation, a traditional clinical examination method that uses a stethoscope to quickly assess airway abnormalities, remains valuable because it is real-time, non-invasive, and easy to perform. Recent advances in computerized respiratory sound analysis (CRSA) have provided a quantifiable approach for recording, editing, and comparing respiratory sounds, and also enable the training of artificial intelligence models to fully exploit the potential of auscultation. However, existing sound analysis models often require complex computations, leading to prolonged processing times and high computational and memory requirements. Moreover, the limited diversity and scope of available databases, which consist mainly of small-sample datasets collected primarily from Caucasian populations, restrict reproducibility and robustness. To overcome these limitations, we developed a new Chinese adult respiratory sound database, LD-DF RSdb, using an electronic stethoscope and a mobile phone. By enrolling 145 participants, we collected 9,584 high-quality recordings, comprising 6,435 normal sounds, 2,782 crackles, 208 wheezes, and 159 combined sounds. We then used a lightweight neural network architecture, MobileNetV2, for automated categorization of the four types of respiratory sounds, achieving appreciable overall performance with an AUC of 0.8923. This study demonstrates the feasibility and potential of using mobile phones, electronic stethoscopes, and MobileNetV2 in CRSA. The proposed method offers a convenient and promising approach to enhancing overall respiratory disease management and may help address healthcare resource disparities.
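To illustrate the classification setup described above, the following is a minimal sketch, not the authors' released code, of how a MobileNetV2 backbone can be adapted to the four respiratory-sound categories (normal, crackle, wheeze, combined). The use of torchvision, the 3-channel 224×224 spectrogram-like input, and the batch size are illustrative assumptions rather than details taken from the study.

```python
# Minimal sketch: adapting torchvision's MobileNetV2 to a 4-class
# respiratory-sound classifier. Input shape and preprocessing are assumptions;
# the study's actual pipeline may differ.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

NUM_CLASSES = 4  # normal, crackle, wheeze, combined

model = mobilenet_v2(weights=None)  # ImageNet weights could be used instead
# Replace the final linear layer so the classifier head outputs 4 logits.
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

# Example forward pass on a batch of channel-replicated log-mel spectrograms
# resized to 224x224 (an assumed input format).
dummy_batch = torch.randn(8, 3, 224, 224)
logits = model(dummy_batch)            # shape: (8, 4)
probs = torch.softmax(logits, dim=1)   # per-class probabilities
print(probs.shape)
```

Replacing only the classifier head keeps the lightweight depthwise-separable backbone intact, which is what makes MobileNetV2 attractive for low-latency, on-device CRSA scenarios such as the mobile-phone setting described here.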