In multi-domain spoken language understanding (MSLU), intent detection and slot filling are crucial components. While prior research has improved MSLU model performance by integrating intent and slot features, such works typically treat multi-domain tasks as collections of independent single-domain tasks, neglecting both intra-domain and inter-domain correlations. In this paper, we propose Piece Attention and Position-aware Embedding with the Top-k Network (PAPET), which leverages fine-grained features to capture multi-domain correlations. Specifically, we segment intents and slots into fine-grained action, domain, and attribute pieces and compute attention between these pieces and the utterance. In multi-domain tasks, piece attention effectively models both intra-domain correlations, through domain pieces, and inter-domain correlations, through action and attribute pieces. Moreover, we introduce a top-k network and relative position-aware embeddings to handle multi-intent utterances and word-to-word correlations, respectively. We conduct experiments on two publicly available MSLU datasets, CrossWOZ and RiSAWOZ. The main results show that PAPET improves on previous SLU models, achieving joint-accuracy gains of 2.43% and 2.19% on the respective datasets. Ablation and multi-domain experiments confirm the effectiveness of PAPET in tackling the challenges of MSLU, and additional experiments examine it from four further perspectives: compatibility with BERT, comparison with large language models, computational efficiency, and error analysis.
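To make the two core mechanisms named above concrete, the following is a minimal, illustrative sketch of token-to-piece attention and top-k multi-intent selection; it is not the authors' implementation. All names, shapes, and hyperparameters (`PieceAttention`, `topk_intents`, `d_model`, the score threshold) are hypothetical assumptions, and the relative position-aware embedding is omitted for brevity.

```python
# Hypothetical sketch of piece attention and top-k intent selection,
# NOT the PAPET reference implementation.
import torch
import torch.nn as nn


class PieceAttention(nn.Module):
    """Attend from utterance tokens to fine-grained pieces
    (action / domain / attribute) and enrich the token states."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)  # queries from utterance tokens
        self.k = nn.Linear(d_model, d_model)  # keys from piece embeddings
        self.v = nn.Linear(d_model, d_model)  # values from piece embeddings

    def forward(self, utt: torch.Tensor, pieces: torch.Tensor) -> torch.Tensor:
        # utt:    (batch, seq_len, d_model)  utterance token states
        # pieces: (num_pieces, d_model)      embeddings of all pieces
        q = self.q(utt)                            # (B, T, D)
        k = self.k(pieces)                         # (P, D)
        v = self.v(pieces)                         # (P, D)
        scores = q @ k.t() / k.size(-1) ** 0.5     # (B, T, P) token-to-piece scores
        attn = scores.softmax(dim=-1)              # normalize over pieces
        return utt + attn @ v                      # residual piece-enriched tokens


def topk_intents(intent_logits: torch.Tensor, k: int = 3, threshold: float = 0.5):
    """Keep up to k intents whose sigmoid score clears a threshold:
    a simple stand-in for a 'top-k network' over multi-intent logits."""
    probs = intent_logits.sigmoid()                # (B, num_intents)
    top_vals, top_idx = probs.topk(k, dim=-1)
    return [idx[val > threshold].tolist() for val, idx in zip(top_vals, top_idx)]


if __name__ == "__main__":
    B, T, P, D, I = 2, 10, 24, 64, 8
    layer = PieceAttention(D)
    enriched = layer(torch.randn(B, T, D), torch.randn(P, D))
    print(enriched.shape)                          # torch.Size([2, 10, 64])
    print(topk_intents(torch.randn(B, I)))         # per-example intent indices
```

In this sketch, a single attention layer lets every utterance token score all pieces at once, so domain pieces and action/attribute pieces contribute to the same enriched representation; the thresholded top-k step then allows a variable number of intents per utterance rather than forcing a single label.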