Illicit Darkweb Classification via Natural-language Processing: Classifying Illicit Content of Webpages based on Textual Information

Giuseppe Cascavilla, Gemma Catolino, Mirella Sangiovanni
{"title":"基于自然语言处理的非法暗网分类:基于文本信息的网页非法内容分类","authors":"Giuseppe Cascavilla, Gemma Catolino, Mirella Sangiovanni","doi":"10.5220/0011298600003283","DOIUrl":null,"url":null,"abstract":"This work aims at expanding previous works done in the context of illegal activities classification, performing three different steps. First, we created a heterogeneous dataset of 113995 onion sites and dark marketplaces. Then, we compared pre-trained transferable models, i.e., ULMFit (Universal Language Model Fine-tuning), Bert (Bidirectional Encoder Representations from Transformers), and RoBERTa (Robustly optimized BERT approach) with a traditional text classification approach like LSTM (Long short-term memory) neural networks. Finally, we developed two illegal activities classification approaches, one for illicit content on the Dark Web and one for identifying the specific types of drugs. Results show that Bert obtained the best approach, classifying the dark web's general content and the types of Drugs with 96.08% and 91.98% of accuracy.","PeriodicalId":74779,"journal":{"name":"SECRYPT ... : proceedings of the International Conference on Security and Cryptography. International Conference on Security and Cryptography","volume":"8 1","pages":"620-626"},"PeriodicalIF":0.0000,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Illicit Darkweb Classification via Natural-language Processing: Classifying Illicit Content of Webpages based on Textual Information\",\"authors\":\"Giuseppe Cascavilla, Gemma Catolino, Mirella Sangiovanni\",\"doi\":\"10.5220/0011298600003283\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work aims at expanding previous works done in the context of illegal activities classification, performing three different steps. First, we created a heterogeneous dataset of 113995 onion sites and dark marketplaces. Then, we compared pre-trained transferable models, i.e., ULMFit (Universal Language Model Fine-tuning), Bert (Bidirectional Encoder Representations from Transformers), and RoBERTa (Robustly optimized BERT approach) with a traditional text classification approach like LSTM (Long short-term memory) neural networks. Finally, we developed two illegal activities classification approaches, one for illicit content on the Dark Web and one for identifying the specific types of drugs. Results show that Bert obtained the best approach, classifying the dark web's general content and the types of Drugs with 96.08% and 91.98% of accuracy.\",\"PeriodicalId\":74779,\"journal\":{\"name\":\"SECRYPT ... : proceedings of the International Conference on Security and Cryptography. International Conference on Security and Cryptography\",\"volume\":\"8 1\",\"pages\":\"620-626\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-12-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SECRYPT ... : proceedings of the International Conference on Security and Cryptography. 
International Conference on Security and Cryptography\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5220/0011298600003283\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SECRYPT ... : proceedings of the International Conference on Security and Cryptography. International Conference on Security and Cryptography","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5220/0011298600003283","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

This work expands previous work on the classification of illegal activities through three steps. First, we created a heterogeneous dataset of 113,995 onion sites and dark marketplaces. Then, we compared pre-trained transferable models, i.e., ULMFiT (Universal Language Model Fine-tuning), BERT (Bidirectional Encoder Representations from Transformers), and RoBERTa (Robustly optimized BERT approach), with a traditional text classification approach based on LSTM (Long Short-Term Memory) neural networks. Finally, we developed two classifiers of illegal activities: one for illicit content on the Dark Web and one for identifying specific types of drugs. Results show that BERT performed best, classifying the Dark Web's general content and the types of drugs with 96.08% and 91.98% accuracy, respectively.
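As an illustration of the transformer-based approach the abstract describes, below is a minimal, hypothetical sketch of fine-tuning a pre-trained BERT model to classify webpage text into illicit-content categories using the Hugging Face transformers library. The model checkpoint, label set, toy training texts, and hyperparameters are assumptions made for the example only; they are not the authors' actual dataset or configuration.

# A minimal, hypothetical sketch: fine-tuning BERT for webpage-text classification.
# The checkpoint, label set, toy texts, and hyperparameters below are assumptions
# for illustration, not the authors' actual data or configuration.
import torch
from torch.utils.data import Dataset
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = ["drugs", "weapons", "counterfeit", "legal"]  # assumed categories

class OnionPageDataset(Dataset):
    """Tokenizes raw page texts and pairs them with integer class labels."""
    def __init__(self, texts, labels, tokenizer, max_len=256):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                       num_labels=len(LABELS))

# Toy placeholder corpus; in practice this would be the crawled onion-page text.
train_ds = OnionPageDataset(["cheap pills shipped worldwide",
                             "privacy discussion forum"], [0, 3], tokenizer)

args = TrainingArguments(output_dir="bert-darkweb", num_train_epochs=3,
                         per_device_train_batch_size=16, logging_steps=50)
Trainer(model=model, args=args, train_dataset=train_ds).train()

At inference time, the fine-tuned model's logits over the assumed label set would be argmax-decoded into a predicted category for each crawled page; the 96.08% and 91.98% figures come from the paper's own evaluation, not from this sketch.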