Verifying phishmon: a framework for dynamic webpage classification

J. Tomaselli, Austin Willoughby, Jorge Vargas Amezcua, Emma Delehanty, Katherine Floyd, Damien Wright, M. Lammers, R. Vetter
{"title":"Verifying phishmon: a framework for dynamic webpage classification","authors":"J. Tomaselli, Austin Willoughby, Jorge Vargas Amezcua, Emma Delehanty, Katherine Floyd, Damien Wright, M. Lammers, R. Vetter","doi":"10.1145/3409334.3452082","DOIUrl":null,"url":null,"abstract":"Phishing attacks are the scourge of the network security manager's job. Looking for a solution to counter this trend, this paper examines and verifies the efficacy of Phishmon, a machine learning framework for scrutinizing webpages that relies on technical attributes of the webpage's structure for classification. More specifically, each of the four machine learning algorithms mentioned in the original paper are applied to a portion of the data set used by Phishmon's creators in order to verify and confirm their results. This paper expands the author's original work in two ways. First, the Phishmon framework is applied to two additional machine learning models for comparison to the first group. Furthermore, dimension reduction and algorithm parameter optimization are explored to determine their effects on the Phishmon framework's accuracy. Our findings suggest improvements to the Phishmon framework's implementation. Namely, downsizing the dataset to include an equal number of phishing and benign webpages as the model is formed appears to balance the accuracy rates achieved for both phishing and benign webpages. Furthermore, removing features with very low relative importance values may save time and processing power while preserving a vast majority of the model's information.","PeriodicalId":148741,"journal":{"name":"Proceedings of the 2021 ACM Southeast Conference","volume":"54 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 ACM Southeast Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3409334.3452082","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Phishing attacks are the scourge of the network security manager's job. In search of a countermeasure, this paper examines and verifies the efficacy of Phishmon, a machine learning framework for scrutinizing webpages that relies on technical attributes of a webpage's structure for classification. More specifically, each of the four machine learning algorithms mentioned in the original paper is applied to a portion of the data set used by Phishmon's creators in order to verify and confirm their results. This paper expands the authors' original work in two ways. First, the Phishmon framework is applied to two additional machine learning models for comparison with the first group. Second, dimension reduction and algorithm parameter optimization are explored to determine their effects on the Phishmon framework's accuracy. Our findings suggest improvements to the Phishmon framework's implementation. Namely, downsampling the dataset so that it contains an equal number of phishing and benign webpages when the model is trained appears to balance the accuracy rates achieved for both classes. Furthermore, removing features with very low relative importance values may save time and processing power while preserving the vast majority of the model's information.
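
The abstract does not name a specific toolkit, but its two suggested improvements, balancing the class counts before training and dropping features with very low relative importance, can be illustrated with a minimal sketch. The sketch below assumes a scikit-learn workflow and hypothetical NumPy arrays X (webpage features) and y (labels, 1 = phishing); the random-forest classifier, the importance threshold, and all variable names are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' code): balance the dataset and prune
    # low-importance features, assuming NumPy arrays X (features) and y (labels).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def balance_classes(X, y, rng=np.random.default_rng(0)):
        """Downsample the majority class so phishing and benign counts match."""
        phish_idx = np.flatnonzero(y == 1)
        benign_idx = np.flatnonzero(y == 0)
        n = min(len(phish_idx), len(benign_idx))
        keep = np.concatenate([rng.choice(phish_idx, n, replace=False),
                               rng.choice(benign_idx, n, replace=False)])
        return X[keep], y[keep]

    def prune_low_importance(X_train, y_train, X_test, threshold=0.005):
        """Drop features whose relative importance falls below the threshold."""
        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(X_train, y_train)
        keep = rf.feature_importances_ >= threshold
        return X_train[:, keep], X_test[:, keep], keep

    # Usage (X, y assumed given):
    # Xb, yb = balance_classes(X, y)
    # X_tr, X_te, y_tr, y_te = train_test_split(Xb, yb, test_size=0.2, random_state=0)
    # X_tr, X_te, kept = prune_low_importance(X_tr, y_tr, X_te)

Under these assumptions, the balancing step addresses the accuracy imbalance between phishing and benign pages, while the importance threshold trades a small amount of model information for reduced training time and feature-extraction cost.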