While most of the world’s knowledge exists in a positive, affirmative form, negative knowledge also plays a significant role by indicating what is not true or what not to think, yet it has been largely overlooked. Existing negative commonsense knowledge generation methods follow a generation-filtering paradigm, but the negative statements they produce are easy to detect and therefore contribute little to either human perception or task-specific algorithms that require negative samples for training. In response, we propose CONEG, a negative commonsense knowledge generation framework that produces confusing negative statements, featuring hierarchy modeling in candidate generation and LLM-enhanced two-stage filtering. Specifically, in the candidate generation stage, we identify congeners for entity phrases in the commonsense knowledge base using box embeddings, which effectively capture the hierarchical correlations among entity phrases and yield confusing candidates. In the candidate filtering stage, we design a two-stage filtering strategy that combines intrinsic triple-confidence measurement with extrinsic refinement by large language models guided by group-based instructions, which effectively filters out true facts and low-quality negative candidates. We empirically evaluate our proposal through both intrinsic assessment and downstream tasks, and the results demonstrate that CONEG and its components are effective at producing confusing negative knowledge, surpassing state-of-the-art methods.