{"title":"使用 \"完美 \"网络模型评估网络安全软件工具","authors":"Jeremy Straub","doi":"arxiv-2409.09175","DOIUrl":null,"url":null,"abstract":"Cybersecurity software tool evaluation is difficult due to the inherently\nadversarial nature of the field. A penetration testing (or offensive) tool must\nbe tested against a viable defensive adversary and a defensive tool must,\nsimilarly, be tested against a viable offensive adversary. Characterizing the\ntool's performance inherently depends on the quality of the adversary, which\ncan vary from test to test. This paper proposes the use of a 'perfect' network,\nrepresenting computing systems, a network and the attack pathways through it as\na methodology to use for testing cybersecurity decision-making tools. This\nfacilitates testing by providing a known and consistent standard for\ncomparison. It also allows testing to include researcher-selected levels of\nerror, noise and uncertainty to evaluate cybersecurity tools under these\nexperimental conditions.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"18 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cybersecurity Software Tool Evaluation Using a 'Perfect' Network Model\",\"authors\":\"Jeremy Straub\",\"doi\":\"arxiv-2409.09175\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cybersecurity software tool evaluation is difficult due to the inherently\\nadversarial nature of the field. A penetration testing (or offensive) tool must\\nbe tested against a viable defensive adversary and a defensive tool must,\\nsimilarly, be tested against a viable offensive adversary. Characterizing the\\ntool's performance inherently depends on the quality of the adversary, which\\ncan vary from test to test. This paper proposes the use of a 'perfect' network,\\nrepresenting computing systems, a network and the attack pathways through it as\\na methodology to use for testing cybersecurity decision-making tools. This\\nfacilitates testing by providing a known and consistent standard for\\ncomparison. It also allows testing to include researcher-selected levels of\\nerror, noise and uncertainty to evaluate cybersecurity tools under these\\nexperimental conditions.\",\"PeriodicalId\":501332,\"journal\":{\"name\":\"arXiv - CS - Cryptography and Security\",\"volume\":\"18 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Cryptography and Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.09175\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Cryptography and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09175","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cybersecurity Software Tool Evaluation Using a 'Perfect' Network Model
Cybersecurity software tool evaluation is difficult due to the inherently
adversarial nature of the field. A penetration testing (or offensive) tool must
be tested against a viable defensive adversary and a defensive tool must,
similarly, be tested against a viable offensive adversary. Characterizing the
tool's performance inherently depends on the quality of the adversary, which
can vary from test to test. This paper proposes the use of a 'perfect' network
model, representing computing systems, a network, and the attack pathways
through it, as a methodology for testing cybersecurity decision-making tools.
This facilitates testing by providing a known and consistent standard for
comparison. It also allows researcher-selected levels of error, noise, and
uncertainty to be introduced, so that cybersecurity tools can be evaluated
under these experimental conditions.
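
To make the idea concrete, the sketch below models a 'perfect' network as a small directed graph of systems and attack pathways, together with a helper that degrades the ground-truth view by a researcher-selected error rate. This is only a minimal illustration of the concept described in the abstract; the names PerfectNetwork and observe_with_noise, and the particular noise model, are assumptions and not the paper's actual implementation.

# Illustrative sketch only: a 'perfect' (ground-truth) network used as a known,
# consistent standard, plus researcher-selected noise for experimental runs.
# Names and structure are assumptions, not the paper's implementation.
import random
from dataclasses import dataclass, field


@dataclass
class PerfectNetwork:
    """Ground truth: systems (nodes) and attack pathways (directed edges)."""
    systems: set = field(default_factory=set)
    pathways: set = field(default_factory=set)

    def add_pathway(self, src, dst):
        """Record an attack pathway from one system to another."""
        self.systems.update((src, dst))
        self.pathways.add((src, dst))

    def observe_with_noise(self, error_rate, seed=None):
        """Return a degraded copy of the ground truth: each true pathway is
        dropped (missed detection) and each absent pathway is added
        (false positive) independently with probability error_rate."""
        rng = random.Random(seed)
        observed = set()
        for edge in self.pathways:
            if rng.random() >= error_rate:   # keep the true pathway
                observed.add(edge)
        nodes = sorted(self.systems)
        for src in nodes:                    # inject spurious pathways
            for dst in nodes:
                if src != dst and (src, dst) not in self.pathways:
                    if rng.random() < error_rate:
                        observed.add((src, dst))
        return observed


if __name__ == "__main__":
    # Tiny ground-truth network: workstation -> server -> database.
    net = PerfectNetwork()
    net.add_pathway("workstation", "server")
    net.add_pathway("server", "database")

    # A decision-making tool under test would be scored against net.pathways
    # (the known standard) while being fed the noisy observation below.
    noisy_view = net.observe_with_noise(error_rate=0.2, seed=42)
    print("ground truth:", sorted(net.pathways))
    print("noisy view  :", sorted(noisy_view))

Because the same ground-truth graph can be reused across runs while only the injected error rate changes, offensive or defensive tools can be compared against a fixed standard rather than against adversaries of varying quality.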