Nowadays, there are large, country-sized fingerprint databases around the world used for identification, border access control and visa issuance procedures. Fingerprint indexing techniques aim to speed up the search process in automatic fingerprint identification systems. Therefore, several preselection, classification and indexing techniques have been proposed in the literature. However, since the proposed systems have been evaluated with different experimental protocols, it is difficult to assess their relative performance. The main objective of this paper is to provide a comparative study of fingerprint indexing methods using a common experimental protocol. Four fingerprint indexing methods, using naive, cascade, matcher and Minutiae Cylinder Code (MCC) approaches, are evaluated on databases from the Fingerprint Verification Competition (FVC) using the Cumulative Match Curve (CMC) and the computing time required. Our study shows that MCC gives the best compromise between identification accuracy and computation time.
{"title":"Comparative Study of Fingerprint Database Indexing Methods","authors":"Joannes Falade, Sandra Cremer, C. Rosenberger","doi":"10.1109/CW.2019.00055","DOIUrl":"https://doi.org/10.1109/CW.2019.00055","url":null,"abstract":"Nowadays, there are large country-sized fingerprint databases for identification, border access controls and also for Visa issuance procedures around the world. Fingerprint indexing techniques aim to speed up the research process in automatic fingerprint identification systems. Therefore, several preselection, classification and indexing techniques have been proposed in the literature. Even if, the proposed systems have been evaluated with different experimental protocols, it is difficult to assess their relative performance. The main objective of this paper is to provide a comparative study of fingerprint indexing methods using a common experimental protocol. Four fingerprint indexing methods, using naive, cascade, matcher and Minutiae Cylinder Code (MCC) approaches are evaluated on FVC databases from the Fingerprint Verification Competition (FVC) using the Cumulative Matches Curve (CMC) and the computing time required. Our study shows that MCC gives the best compromise between identification accuracy and computation time.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129770251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
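The evaluation metric named in the abstract, the Cumulative Match Curve, can be sketched in a few lines: for each rank r it gives the fraction of probe queries whose correct identity appears within the top r candidates returned by the indexing method. The following is an illustrative stand-alone sketch, not the paper's code, and the rank values are hypothetical example data.

```python
# Hedged sketch: computing a Cumulative Match Curve (CMC) from the rank at
# which each probe's true identity was retrieved. Not taken from the paper.

def cmc_curve(true_ranks, max_rank):
    """For each rank r in 1..max_rank, the fraction of queries whose
    correct identity appears at rank <= r in the candidate list."""
    n = len(true_ranks)
    return [sum(1 for t in true_ranks if t <= r) / n
            for r in range(1, max_rank + 1)]

# Hypothetical ranks of the correct identity for 5 probe fingerprints.
ranks = [1, 3, 1, 2, 5]
print(cmc_curve(ranks, 5))  # → [0.4, 0.6, 0.8, 0.8, 1.0]
```

A higher curve at low ranks means the indexing method places the true identity near the top of its candidate list more often, which is exactly what the paper's comparison measures.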
Light art represents various objects with light strokes drawn in the air. It takes about 10 to 30 seconds to create a single light art picture. A light art movie could be created by combining a set of such pictures; however, this is time-consuming, because a movie requires a large number of frames, and difficult, because a person would need to author a sequence of temporally consistent frames. To solve this problem, we are developing a virtual and interactive light-art-like system. This system extracts the edges of human bodies from depth information and then applies a point-to-curve algorithm to mimic hand-drawn figures. Finally, the system applies a neon-like drawing style and a visual effect to the figures and displays them in real time. This paper also introduces our artwork produced with this system, and a user evaluation showing that the artwork conveyed happiness and excitement to the audience.
{"title":"A Virtual and Interactive Light-Art-Like Representation of Human Silhouette","authors":"Momoko Tsuchiya, T. Itoh, Michael Neff, Yuhan Liu","doi":"10.1109/CW.2019.00078","DOIUrl":"https://doi.org/10.1109/CW.2019.00078","url":null,"abstract":"Light art represents various objects by a light stroke drawn in the air. It takes about 10 to 30 seconds to create a light art picture. We could create a light art movie by binding a set of pictures; however, it is a time-consuming task because we need a large number of frames to create a movie and difficult because a person would need to author a sequence of temporally consistent frames. To solve this problem, we are developing a virtual and interactive light-art-like system. This system extracts edges of human bodies from depth information and then applies a point-to-curve algorithm to mimic hand-drawn figures. Finally, the system applies neon-like drawing and a visual effect on the figures and displays them in real-time. This paper also introduces our artwork produced with this system, and a user evaluation that shows that the artwork conveyed happiness and excitement to the audience.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"344 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131092491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
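The abstract's "point-to-curve" step turns a sequence of extracted edge points into a smooth, hand-drawn-looking stroke. The paper does not specify its algorithm; the sketch below uses Chaikin's corner-cutting subdivision, one standard choice for this kind of polyline smoothing, purely as an illustration.

```python
# Hedged sketch of one possible point-to-curve step: Chaikin's corner-cutting
# subdivision. Each pass replaces every segment (p, q) of the polyline with
# the two points at 1/4 and 3/4 along it, which rounds off corners.
# This is a generic technique, not the paper's documented algorithm.

def chaikin(points, iterations=2):
    """Smooth a polyline of (x, y) edge points by repeated corner cutting."""
    for _ in range(iterations):
        smoothed = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = smoothed
    return points

# A right-angled corner is progressively rounded into a curve.
print(chaikin([(0, 0), (1, 0), (1, 1)], iterations=1))
```

Each iteration roughly doubles the point count, so a couple of passes over sparse edge points already yields a stroke smooth enough for a neon-like rendering effect.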
Alejandro Guerra-Manzanares, Hayretdin Bahsi, S. Nõmm
Timely detection of intrusions is essential in IoT networks, considering the massive attacks launched by huge botnets composed of insecure devices. Machine learning methods have demonstrated promising results for the detection of such attacks. However, the effectiveness of these methods may greatly benefit from a reduction in feature set size, as this can mitigate the detrimental impact of unnecessary features and minimize the computational resources required for intrusion detection in such resource-constrained networks. This paper elaborates on feature selection methods applied to machine learning models induced for botnet detection in IoT networks. Particular attention is devoted to the use of wrapper methods and their combination with filter methods. While filter-based feature selection methods provide a computationally light approach to selecting the most informative features, it is shown that their use in combination with wrapper methods boosts detection accuracy.
{"title":"Hybrid Feature Selection Models for Machine Learning Based Botnet Detection in IoT Networks","authors":"Alejandro Guerra-Manzanares, Hayretdin Bahsi, S. Nõmm","doi":"10.1109/CW.2019.00059","DOIUrl":"https://doi.org/10.1109/CW.2019.00059","url":null,"abstract":"Timely detection of intrusions is essential in IoT networks, considering the massive attacks launched by the huge-sized botnets which are composed of insecure devices. Machine learning methods have demonstrated promising results for the detection of such attacks. However, the effectiveness of such methods may greatly benefit from the reduction of feature set size as this may prevent the impeding impact of unnecessary features and minimize the computational resources required for intrusion detection in such networks having several limitations. This paper elaborates on feature selection methods applied to machine learning models which are induced for botnet detection in IoT networks. A particular attention is devoted to the use of wrapper methods and their combination with filter methods. While filter-based feature selection methods provide a computationally light approach to select the most informative features, it is shown that their utilization in combination with wrapper methods boosts up the detection accuracy.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129604358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
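The hybrid idea described above — a cheap filter pass to prune the feature space, then an expensive wrapper pass over the survivors — can be sketched with the standard library alone. The tiny dataset and the 1-nearest-neighbour wrapper model below are hypothetical stand-ins, not the paper's actual classifiers or traffic features.

```python
# Hedged sketch of hybrid filter + wrapper feature selection (stdlib only).
import math

def corr(xs, ys):
    """Pearson correlation between a feature column and the labels."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    vy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (vx * vy) if vx and vy else 0.0

def filter_stage(X, y, keep):
    """Filter: rank features by |correlation with the label|, keep top-k."""
    scores = [(abs(corr([row[j] for row in X], y)), j) for j in range(len(X[0]))]
    return [j for _, j in sorted(scores, reverse=True)[:keep]]

def leave_one_out_acc(X, y, feats):
    """Leave-one-out accuracy of a 1-NN classifier restricted to `feats`."""
    correct = 0
    for i in range(len(X)):
        dists = [(sum((X[i][f] - X[k][f]) ** 2 for f in feats), k)
                 for k in range(len(X)) if k != i]
        correct += y[min(dists)[1]] == y[i]
    return correct / len(X)

def wrapper_stage(X, y, candidates):
    """Wrapper: greedy forward selection over the filtered candidates."""
    chosen, best = [], 0.0
    improved = True
    while improved:
        improved = False
        for f in candidates:
            if f in chosen:
                continue
            acc = leave_one_out_acc(X, y, chosen + [f])
            if acc > best:
                best, pick, improved = acc, f, True
        if improved:
            chosen.append(pick)
    return chosen, best

# Hypothetical data: features 0 and 2 are informative, feature 1 is noise.
X = [[0, 5, 1], [0, 3, 1], [1, 9, 0], [1, 1, 0]]
y = [0, 0, 1, 1]
candidates = filter_stage(X, y, keep=2)       # filter prunes the noise feature
selected, acc = wrapper_stage(X, y, candidates)
print(selected, acc)  # → [2] 1.0
```

The filter stage keeps the wrapper's search space small, which is why the combination stays tractable while still optimizing the actual detection accuracy.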
This paper records the research and development employed to create the visuals and concept art for "La Petite Fee Cosmo", a 2D interactive learning game created to encourage learning through innovative teaching. The concept of productive failure served as a backbone for the game. Various creative art directions and methods were explored during the process to create the desired visual assets for the game. Experimentation was done in both digital and traditional mediums such as watercolour. Watercolour served as a useful medium for creating fresh visuals to convey ideas: the spontaneous way in which watercolour pigments interact with cotton paper creates interesting textures and visual interest. Watercolour was originally intended as the primary medium for the game art; however, it was later replaced by digital mediums due to technical complications. Concept designs for characters, environments and props were created to assist the visualization process during the production of the game. Many of these visuals are inspired by actual locations and various artists. Emphasis was placed on creating refined visual assets to increase appeal as well as engagement and interest.
{"title":"The Art of La Petite Fee Cosmo","authors":"Hui En Lye, V. Kannappan, Jeffrey Hong","doi":"10.1109/CW.2019.00069","DOIUrl":"https://doi.org/10.1109/CW.2019.00069","url":null,"abstract":"This paper records the research and development employed to create the visuals and concept art for \"La Petite Fee Cosmo\". \"La Petite Fee Cosmo\" is a 2D interactive learning game created to encourage learning through innovative teaching. The concept of productive failure served as a backbone for the game. Various creative art directions and methods were explored during the process so as to create the desired visual assets for the game. Experimentation was done in both digital and traditional mediums such as watercolour. Watercolour served as a useful medium to create fresh visuals to convey ideas. The spontaneity in which watercolour pigments interact with cotton paper creates interesting textures and visual interest. Watercolour was originally intended to be used as the primary medium for the game art however it was later replaced by digital mediums due to technical complications. Concept designs for characters, environments and props were created to assist the visualization process during the production of the game. Many of these visuals are inspired by actual locations and various artists. Emphasis was placed on creating refined visual assets to allow for increased appeal as well as increased engagement and interest.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129607461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One of the most common hacker attacks on organizations' public communication infrastructure is website defacement. This attack consists of modifying the appearance of a website by affixing a signature or a particular message, or of making the website inactive. The goals of web defacement are diverse and range from simple recognition of the hacker's technical prowess to claim messages posted on the victim's website by minority groups, a practice referred to as hacktivism. The main consequence of this attack is a loss of credibility for the hacked organization. In some cases, this can lead to indirect economic losses because of the distorted web content conveyed by the hacked organization. Since websites carry a very large amount of information, it is very important to protect them from this form of attack. In most cases, the defense against web defacement relies on monitoring websites and restoring the system after an incident occurs. The time between the execution of the attack and the restoration of the system depends heavily on the performance of the website's monitoring tool and the response capacity of the technical teams. Most website defacement defense tools available on the market are based on verifying the integrity of the data and notifying administrators when signatures change. This technique is more or less effective for static websites with slow modification cycles. For dynamic websites, interfaced with databases or syndicated, where changes are frequent and unpredictable, it becomes almost impossible to rely solely on signature verification and data integrity to know whether a website has been attacked. This work proposes a model that combines several techniques (data integrity analysis, monitoring changes in the value of an artifice, and the adoption of a high-availability architecture) to be used in developing a tool against this type of attack.
{"title":"A Contribution to Detect and Prevent a Website Defacement","authors":"Barerem-Melgueba Mao, Kanlanfei Damnam Bagolibe","doi":"10.1109/CW.2019.00062","DOIUrl":"https://doi.org/10.1109/CW.2019.00062","url":null,"abstract":"One of the most common hackers attacks on organizations public communication infrastructure is website defacement. This attack consists of modifying the appearance of a website by affixing a signature or a particular message or making the website inactive. The goals of web defacement are diverse and range from simply recognizing the technical prowess of the hacker to claims messages posted on the victim's website by minority groups, referred to as hacktivism. The main consequence of this attack is the loss of credibility of the hacked organization. This can, in some cases, lead to indirect economic losses because of the distorted web content conveyed by the hacked organization. Since websites carry a very large amount of information, it is very important to protect them from this form of attack. In most cases, the defense against web defacement relies on monitoring websites and restoring the system after the incident occurred. The time between the execution of the attack and the system's restoration time is highly dependent on the performance of the website's monitoring tool and the response capacity of the technical teams. Most of website defacement defense tools available on the market are generally based on the verification of the integrity of the data and the notification of the administrators when signatures change. This technique is more or less effective for static websites subjected to weak modification cycles. For dynamic websites, interfaced with databases or syndicated, where the changes are relatively short and random, it becomes almost impossible to use techniques based solely on signature verification and data integrity to know if a website was attacked. This work proposes a model that combines several techniques (data integrity analysis, changes of the value of an artifice and the adoption of high availability architecture) to be used to develop a tool against this type of attacks.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115452098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
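The baseline technique the abstract describes — verify data integrity and notify administrators when signatures change — reduces to comparing a cryptographic hash of the served page against a known-good value. A minimal sketch, with hypothetical page content (not the paper's tool):

```python
# Hedged sketch of hash-based defacement detection, the signature-verification
# baseline the paper builds on. Page contents here are made-up examples.
import hashlib

def fingerprint(content: bytes) -> str:
    """SHA-256 digest of the page content, used as its integrity signature."""
    return hashlib.sha256(content).hexdigest()

# Signature recorded when the page is known to be intact.
baseline = fingerprint(b"<html><body>Welcome</body></html>")

def is_defaced(current: bytes, baseline_hash: str) -> bool:
    """Flag ANY change - which is why this works for static pages but fails
    for dynamic ones, where the paper adds complementary techniques
    (an artifice value to watch, a high-availability architecture)."""
    return fingerprint(current) != baseline_hash

print(is_defaced(b"<html><body>HACKED</body></html>", baseline))  # → True
```

For a dynamic page, every legitimate update would also trip this check, which illustrates the limitation that motivates the paper's combined model.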
As the will to deploy neural network models on embedded systems grows, and considering the associated memory footprint and energy consumption requirements, finding lighter ways to store neural networks, such as parameter quantization, and more efficient inference methods has become a major research topic. In parallel, adversarial machine learning has risen recently, unveiling critical flaws in machine learning models, especially neural networks. In particular, perturbed inputs called adversarial examples have been shown to fool a model into making incorrect predictions. In this paper, we investigate the adversarial robustness of quantized neural networks under different attacks. We show that quantization is not a robust protection when considering advanced threats and may result in a severe form of gradient masking, which leads to a false impression of security. However, and interestingly, we experimentally observe poor transferability between full-precision and quantized models, and between models with different quantization levels, which we explain by the quantization value shift phenomenon and gradient misalignment.
{"title":"Impact of Low-Bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks","authors":"Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre","doi":"10.1109/CW.2019.00057","DOIUrl":"https://doi.org/10.1109/CW.2019.00057","url":null,"abstract":"As the will to deploy neural network models on embedded systems grows, and considering the related memory footprint and energy consumption requirements, finding lighter solutions to store neural networks such as parameter quantization and more efficient inference methods becomes major research topics. Parallel to that, adversarial machine learning has risen recently, unveiling some critical flaws of machine learning models, especially neural networks. In particular, perturbed inputs called adversarial examples have been shown to fool a model into making incorrect predictions. In this paper, we investigate the adversarial robustness of quantized neural networks under different attacks. We show that quantization is not a robust protection when considering advanced threats and may result in severe form of gradient masking which leads to a false impression of security. However, and interestingly, we experimentally observe poor transferability capacities between full-precision and quantized models and between models with different quantization levels which we explain by the quantization value shift phenomenon and gradient misalignment.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116639276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
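Low-bitwidth parameter quantization of the kind studied above can be illustrated with a generic uniform symmetric scheme: weights are snapped to a small grid of shared levels and mapped back to floats ("fake quantization"). This is a textbook scheme for illustration only; the paper's exact quantization method may differ, and the weights below are made-up.

```python
# Hedged sketch of uniform symmetric weight quantization to a given bitwidth.
# Assumes at least one nonzero weight (otherwise the scale would be zero).

def quantize(weights, bits):
    """Round floats onto 2**(bits-1) - 1 positive levels (mirrored for
    negatives, plus zero), then map back to floats."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [max(-qmax, min(qmax, round(w / scale))) * scale for w in weights]

# At 3 bits, distinct float weights collapse onto at most 7 shared levels,
# which is the information loss behind the gradient-masking effects studied.
print(quantize([0.91, -0.42, 0.07, -0.88], bits=3))
```

The coarser the grid (smaller `bits`), the more the loss surface becomes piecewise flat, which is one intuition for why gradients computed on such models can mislead gradient-based attacks.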
Protecting organizations from Cyber exploits requires timely intelligence about Cyber vulnerabilities and attacks, referred to as threats. Cyber threat intelligence can be extracted from various sources, including social media platforms where users publish threat information in real time. Gathering Cyber threat intelligence from social media sites is a time-consuming task for security analysts that can delay a timely response to emerging Cyber threats. We propose a framework for automatically gathering Cyber threat intelligence from Twitter by using a novelty detection model. Our model learns the features of Cyber threat intelligence from the threat descriptions published in public repositories such as Common Vulnerabilities and Exposures (CVE) and classifies a new, unseen tweet as either normal or anomalous with respect to Cyber threat intelligence. We evaluate our framework using a purpose-built data set of tweets from 50 influential Cyber security-related accounts over twelve months (in 2018). Our classifier achieves an F1-score of 0.643 for classifying Cyber threat tweets and outperforms several baselines, including binary classification models. Analysis of the classification results suggests that Cyber threat-relevant tweets on Twitter often do not include the CVE identifier of the related threats. Hence, it would be valuable to collect these tweets and associate them with the related CVE identifier for Cyber security applications.
{"title":"Gathering Cyber Threat Intelligence from Twitter Using Novelty Classification","authors":"Ba-Dung Le, Guanhua Wang, Mehwish Nasim, M. Babar","doi":"10.1109/CW.2019.00058","DOIUrl":"https://doi.org/10.1109/CW.2019.00058","url":null,"abstract":"Preventing organizations from Cyber exploits needs timely intelligence about Cyber vulnerabilities and attacks, referred to as threats. Cyber threat intelligence can be extracted from various sources including social media platforms where users publish the threat information in real-time. Gathering Cyber threat intelligence from social media sites is a time-consuming task for security analysts that can delay timely response to emerging Cyber threats. We propose a framework for automatically gathering Cyber threat intelligence from Twitter by using a novelty detection model. Our model learns the features of Cyber threat intelligence from the threat descriptions published in public repositories such as Common Vulnerabilities and Exposures (CVE) and classifies a new unseen tweet as either normal or anomalous to Cyber threat intelligence. We evaluate our framework using a purpose-built data set of tweets from 50 influential Cyber security-related accounts over twelve months (in 2018). Our classifier achieves the F1-score of 0.643 for classifying Cyber threat tweets and outperforms several baselines including binary classification models. Analysis of the classification results suggests that Cyber threat-relevant tweets on Twitter do not often include the CVE identifier of the related threats. Hence, it would be valuable to collect these tweets and associate them with the related CVE identifier for Cyber security applications.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115563817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
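The framework's core idea — learn what threat descriptions look like from CVE text, then judge whether a new tweet is "normal" with respect to that model — can be caricatured with a simple vocabulary-overlap detector. The texts, tokenizer and threshold below are hypothetical stand-ins; the paper's actual features and novelty model are more sophisticated.

```python
# Hedged sketch of CVE-trained novelty classification for tweets (stdlib only).

def train_vocab(threat_texts):
    """Learn a threat vocabulary from CVE-style threat descriptions."""
    vocab = set()
    for t in threat_texts:
        vocab.update(t.lower().split())
    return vocab

def is_threat_related(tweet, vocab, threshold=0.3):
    """A tweet is 'normal' w.r.t. threat intelligence when enough of its
    tokens overlap the learned threat vocabulary; otherwise it is novel."""
    tokens = tweet.lower().split()
    if not tokens:
        return False
    overlap = sum(1 for tok in tokens if tok in vocab) / len(tokens)
    return overlap >= threshold

# Hypothetical CVE-style training descriptions.
cve_texts = ["buffer overflow allows remote code execution",
             "sql injection vulnerability in login form"]
vocab = train_vocab(cve_texts)
print(is_threat_related("new remote code execution vulnerability found", vocab))  # → True
```

Note the asymmetry that makes this a novelty (one-class) setup rather than binary classification: only threat-side text is needed for training, which matches how the paper uses CVE descriptions.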
The Internet offers an abundance of online sources for trending topics and news. However, this gives rise to the issue of content overload, where users must filter through large amounts of content to find what is relevant or interesting to them. This project aims to solve this issue with a web application called Twittener, which aims to improve users' experience and time-efficiency when reading news online. The methods used include text-to-speech technology, sentiment analysis and a recommender system. Text-to-speech technology enables users to listen to tweets and news without paying attention to their screens; this could also be useful for people with visual impairments. Sentiment analysis on Twitter trends provides useful information on the general sentiment towards each trend, and a hybrid recommender system is deployed to recommend news likely to interest users. This paper documents the development, implementation, design and implications of Twittener.
{"title":"Twittener: An Aggregated News Platform","authors":"Owen Noel Newton Fernando, Chan-Wei Chang","doi":"10.1109/CW.2019.00071","DOIUrl":"https://doi.org/10.1109/CW.2019.00071","url":null,"abstract":"The Internet offers an abundance of online sources for trending topics and news. However, this gives rise to the issue of content overload, where users must filter through large amount of content to find those that are of relevance or interest to them. This project aims to solve this issue by creating a web application called Twittener. Twittener aims to improve users' experience and time-efficiency when reading news online. Methods used include text-to-speech technology, sentiment analysis and recommender system. Text-to-speech technology enables users to listen to tweets and news without paying attention to their screens. This could also be useful for populations with visual impairments. Sentiment analysis on Twitter trends provides useful information on general sentiment towards each trend and a hybrid recommender system is deployed to recommend news that would likely be of interest to users. This paper seeks to document the development, implementation, design and implications of Twittener.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129800559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
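The per-trend sentiment summary Twittener provides could, in its simplest form, be a lexicon-based polarity score averaged over a trend's tweets. The word lists and scoring rule below are illustrative only; the abstract does not specify the app's actual sentiment method.

```python
# Hedged sketch of lexicon-based trend sentiment (hypothetical word lists).

POSITIVE = {"great", "love", "win", "happy", "good"}
NEGATIVE = {"bad", "hate", "crash", "sad", "awful"}

def sentiment(tweet: str) -> int:
    """Polarity of one tweet: +1 positive, -1 negative, 0 neutral,
    decided by counting lexicon hits."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

def trend_sentiment(tweets):
    """Average polarity across a trend's tweets: a rough overall mood."""
    return sum(sentiment(t) for t in tweets) / len(tweets)

print(sentiment("i love this great win"))  # → 1
```

A production system would more likely use a trained classifier, but the aggregate-per-trend structure would be the same: score each tweet, then summarize per trend.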