The anonymity and convenience of social media platforms allow the public to express themselves and even vent, which has driven a surge in cyberviolence behaviors (CVB). Recent advances in machine learning, especially deep learning, have substantially benefited CVB detection. However, despite the wide use of state-of-the-art deep-learning models, previous studies analyzed each post or comment only for the presence of (obfuscated) abusive text, which is neither comprehensive nor precise because violent content posted online does not necessarily contain negative words. In complex and conflict-laden situations, people may overlook implicit violence, leading to failures in situational judgment. Herein, we designed a well-grounded and explainable deep-learning framework based on the theory of planned behavior (TPB) to explore the motivations behind CVB and thereby detect it more effectively. Specifically, we constructed a systematic and comprehensive suite of computable features grounded in TPB and then proposed a novel model, the Multilevel and Multiattribute Embedding CVB detection model considering Dual-view Contextual Information. Our framework detected implicit and explicit CVB with macro F1 scores above 88.67%, outperforming state-of-the-art methods. We further provided differentiated strategies according to the scale and distribution of different CVB classes and proposed the corresponding managerial implications. Our study sheds light on how platforms can manage online content while mitigating the risks of wasted governance costs and deterioration of the cyber ecosystem.