Multigoal reinforcement learning (RL) algorithms aim to achieve and generalize over diverse goals. However, unlike single-goal agents, multigoal agents struggle to break through the exploration bottleneck with a comparable amount of interaction, because goal-oriented experiences are rarely reusable across goals and goal-reaching rewards are sparse. Well-arranged behavior goals during training are therefore essential for multigoal agents, especially in long-horizon tasks. To this end, we propose efficient multigoal exploration based on maximizing the entropy of successor features and on exploring entropy-regularized successor matching, namely, E$^2$