Namuunbadralt Zolboot, Quinn Johnson, Dakun Shen, Alexander Redei
We are in the golden age of AI, and developing AI software for computer games is one of its most exciting trends. Games like Hearthstone Battlegrounds have captivated millions of players with their sophistication, offering a practically unbounded number of unique in-game interactions. In this research, a Monte-Carlo simulation was built to help players achieve higher ranks. This was achieved through a learned simulation trained against a top Hearthstone Battlegrounds player's historic wins. In our experiment, we collected 3 data sets from strategic Hearthstone Battlegrounds games. Each data set includes 6 turns of battle phases, 42 minions for battle boards, and 22 minions for Bob's tavern. The evaluation demonstrated that the AI assistant achieved better performance, losing on average only 9.56% of turns vs. 26.26% for the experienced Hearthstone Battlegrounds players, and winning 56% vs. 46.91%.
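The Monte-Carlo estimation idea above can be sketched in a few lines: run many randomized battle playouts between two boards and report the empirical win rate. The minion encoding as (attack, health) pairs, the random-target battle rules, and the `win_rate` helper are simplifying assumptions for illustration, not the paper's trained simulation.

```python
import random

def simulate_battle(board_a, board_b, rng):
    # One random playout: minions (attack, health) trade blows until a board is empty.
    # Assumes every minion has positive attack so the playout terminates.
    a = [list(m) for m in board_a]
    b = [list(m) for m in board_b]
    attacker, defender = a, b
    while a and b:
        atk = rng.choice(attacker)
        dfn = rng.choice(defender)
        dfn[1] -= atk[0]          # simultaneous damage, as in Battlegrounds combat
        atk[1] -= dfn[0]
        if dfn[1] <= 0:
            defender.remove(dfn)
        if atk[1] <= 0:
            attacker.remove(atk)
        attacker, defender = defender, attacker
    return 1 if a else 0          # ties are counted as a loss for board_a here

def win_rate(board_a, board_b, n=1000, seed=0):
    """Empirical probability that board_a wins, over n random playouts."""
    rng = random.Random(seed)
    return sum(simulate_battle(board_a, board_b, rng) for _ in range(n)) / n
```

An assistant built on this idea would call `win_rate` for each candidate board reachable from the current tavern and recommend the one with the highest estimate.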
{"title":"Hearthstone Battleground: An AI Assistant with Monte Carlo Tree Search","authors":"Namuunbadralt Zolboot, Quinn Johnson, Dakun Shen, Alexander Redei","doi":"10.29007/mn6n","DOIUrl":"https://doi.org/10.29007/mn6n","url":null,"abstract":"We are in the golden age of AI. Developing AI software for computer games is one of the most exciting trends of today’s day and age. Recently games like Hearthstone Bat- tlegrounds have captivated millions of players due to it’s sophistication, with an infinite number of unique interactions that can occur in the game. In this research, a Monte-Carlo simulation was built to help players achieve higher ranks. This was achieved through a learned simulation which was trained against a top Hearthstone Battleground player’s historic win. In our experiment, we collected 3 data sets from strategic Hearthstone Bat- tleground games. Each data set includes 6 turns of battle phases, 42 minions for battle boards, and 22 minions for Bob’s tavern. The evaluation demonstrated that the AI assis- tant achieved better performance — loosing on average only 9.56% of turns vs 26.26% for the experienced Hearthstone Battleground players, and winning 56% vs 46.91%.","PeriodicalId":93549,"journal":{"name":"EPiC series in computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69445624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crowd Density Estimation (CDE) can be used to ensure the safety of crowds by preventing stampedes or reducing the spread of disease, a need made urgent by the rise of Covid-19. CDE is a challenging problem due to issues such as occlusion and massive scale variations. This research looks to create, evaluate, and compare different approaches to crowd counting, focusing on the ability of dilated convolution to extract scale-invariant contextual information. In this work we build and train three different model architectures: a Convolutional Neural Network (CNN) without dilation, a CNN with dilation to capture context, and a CNN with an Atrous Spatial Pyramid Pooling (ASPP) layer to capture scale-invariant contextual features. We train each architecture multiple times to ensure statistical significance and evaluate them using the Mean Squared Error (MSE), Mean Absolute Error (MAE), and Grid Average Mean Absolute Error (GAME) on the ShanghaiTech and UCF CC 50 datasets. Comparing the results between approaches, we find that applying dilated convolution to sparser crowd images with little scale variation does not make a significant difference, but on highly congested crowd images dilated convolutions are more resilient to occlusion and perform better. Furthermore, we find that adding an ASPP layer improves performance when there are significant differences in the scale of objects within the crowds. The code for this research is available at https://github.com/ThishenP/crowd-density.
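The core trick the three architectures vary, dilation, is easy to see in one dimension: a dilated kernel skips `dilation - 1` samples between taps, so its receptive field grows to `(k - 1) * dilation + 1` without adding parameters. The pure-Python helper below is an illustrative sketch, not the paper's CNN code:

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """1-D convolution, valid padding, with `dilation - 1` gaps between kernel taps."""
    k = len(kernel)
    span = (k - 1) * dilation + 1   # receptive field of one output element
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]
```

An ASPP layer runs several such convolutions in parallel with different dilation rates and concatenates the results, which is what lets the extracted context cover objects at several scales at once.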
{"title":"Dilated Convolution to Capture Scale Invariant Context in Crowd Density Estimation","authors":"Thishen Packirisamy, Richard Klein","doi":"10.29007/qdm6","DOIUrl":"https://doi.org/10.29007/qdm6","url":null,"abstract":"Crowd Density Estimation (CDE) can be used ensure safety of crowds by preventing stampedes or reducing spread of disease which was made urgent with the rise of Covid-19. CDE a challenging problem due to problems such as occlusion and massive scale varia- tions. This research looks to create, evaluate and compare different approaches to crowd counting focusing on the ability for dilated convolution to extract scale-invariant contex- tual information. In this work we build and train three different model architectures: a Convolutional Neural Network (CNN) without dilation, a CNN with dilation to capture context and a CNN with an Atrous Spatial Pyramid Pooling (ASPP) layer to capture scale-invariant contextual features. We train each architecture multiple times to ensure statistical significance and evaluate them using the Mean Squared Error (MSE), Mean Average Error (MAE) and Grid Average Mean Absolute Error (GAME) on the Shang- haiTech and UCF CC 50 datasets. Comparing the results between approaches we find that applying dilated convolution to more sparse crowd images with little scale variations does not make a significant difference but, on highly congested crowd images, dilated con- volutions are more resilient to occlusion and perform better. Furthermore, we find that adding an ASPP layer improves performance in the case when there are significant differ- ences in the scale of objects within the crowds. 
The code for this research is available at https://github.com/ThishenP/crowd-density.","PeriodicalId":93549,"journal":{"name":"EPiC series in computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69449602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial intelligence has become mainstream technology. Automatic vacuum cleaners, or robot vacuums, changed the field of vacuum cleaning by introducing automation, a technology that makes people's daily lives easier and more economical. Robot vacuums were invented at the Massachusetts Institute of Technology in 1990. Today, robot vacuums are successful and have many users all around the world: more than 2.5 million families living in 60 countries use them. However, questions are still being asked about a robot vacuum's efficiency of room coverage and its ability to remember the redundant areas that have already been cleaned. The answers to these questions are unclear, as manufacturers do not reveal the algorithms learned by the robots, or reveal them only partially, for business reasons. This study was proposed in response to the above questions: our mobile application tracks and records the actual geolocations of the robot as it moves across various points of the room, extracting real geolocation data (latitude and longitude) from satellites under multiple different room conditions. Once the robot has cleaned the room, the application reports all areas the robot has cleaned for analysis purposes. We present the actual route map, the coverage area map, and the duplicate area map of the robot, which potentially lead to further understanding of a robot vacuum's effectiveness.
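Coverage and duplicate-area maps like those described can be derived by snapping the recorded position samples onto a grid and counting how often the robot enters each cell. The cell size, the planar (x, y) encoding, and the `coverage_report` helper are illustrative assumptions, not the study's application code:

```python
def coverage_report(path, cell=0.5):
    """Snap a recorded path of (x, y) positions to grid cells of size `cell`
    and count entries per cell: cells covered vs. cells cleaned more than once."""
    visits = {}
    prev = None
    for x, y in path:
        key = (int(x // cell), int(y // cell))
        if key != prev:                      # count once per entry, not per GPS sample
            visits[key] = visits.get(key, 0) + 1
        prev = key
    covered = len(visits)
    revisited = sum(1 for n in visits.values() if n > 1)
    return covered, revisited
```

The coverage map is the set of visited cells, and the duplicate-area map is the subset whose entry count exceeds one.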
{"title":"Verifying and Assessing a Performance of an Automatic Vacuum Robot under Different Room Conditions","authors":"Thitivatr Patanasakpinyo, Natcha Chen, Natthikarn Singsornsri, Nattapach Kanchanaporn","doi":"10.29007/w64v","DOIUrl":"https://doi.org/10.29007/w64v","url":null,"abstract":"Artificial intelligence has become the mainstream technology. Automatic vacuum clean- ers or robot vacuums change the field of vacuum cleaners with an involvement of an au- tomation, which is a technology that makes people’s daily life easier and more economical. Robot vacuums were invented by the Massachusetts Institute of Technology in 1990. To- day, robot vacuums are successful and have many users all around the world. More than 2.5 million families live in 60 countries use them. However, a question that is still being asked about robot vacuum is the efficiency of room coverage and the ability to remem- ber the redundant areas that have already been cleaned. The answers to these questions are unclear, as manufacturers do not reveal the algorithms that are learned by robots, or sometimes they just partially did, due to business reasons. This study was proposed in response to the above questions by using our mobile application for tracking and recording actual geolocations of the robot walking across various points of the room by extracting the real geolocation data from satellites consisting of a latitude and a longitude under multiple different room conditions. Once the robot has cleaned throughout the room, the applica- tion reported all areas that the robot has cleaned for analysis purpose. 
We presented the actual route map, the coverage area map, and the duplicate area map of the robot that potentially led the further understanding of robot vacuum’s effectiveness.","PeriodicalId":93549,"journal":{"name":"EPiC series in computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69452920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Christopher Lewis, Sven Diaz-Juarez, Steven J. Anbro, Alison J. Szarko, Ramona Houmanfar, Laura Crosswell, Michelle Rebaleati, Luka Starmer, Frederick Harris
Virtual reality (VR) is a relatively new and rapidly growing field which is becoming accessible to the larger research community as well as being commercially available for entertainment. Relatively cheap, commercially available head-mounted displays (HMDs) are the largest reason for this increase in availability. This work uses Unity and an HMD to create a VR environment that displays a 360° video of a pre-recorded patient handoff between a nurse and a doctor. The VR environment went through different designs during development. This work discusses each stage of its design and the unique challenges we encountered during development, as well as the implementation of the user study and the visualization of the collected eye-tracking data.
{"title":"An Application for Simulating Patient Handoff Using 360 Video and Eye Tracking in Virtual Reality","authors":"Christopher Lewis, Sven Diaz-Juarez, Steven J. Anbro, Alison J. Szarko, Ramona Houmanfar, Laura Crosswell, Michelle Rebaleati, Luka Starmer, Frederick Harris","doi":"10.29007/82j6","DOIUrl":"https://doi.org/10.29007/82j6","url":null,"abstract":"Virtual reality (VR) is a relatively new and rapidly growing field which is becoming accessible by the larger research community as well as being commercially available for en- tertainment. Relatively cheap and commercially available head mounted displays (HMDs) are the largest reason for this increase in availability. This work uses Unity and an HMD to create a VR environment to display a 360◦video of a pre-recorded patient handoff be- tween a nurse and doctor. The VR environment went through different designs while in development. This works discusses each stage of it’s design and the unique challenges we encountered during development. This work also discusses the implementation of the user study and the visualization of collected eye tracking data.","PeriodicalId":93549,"journal":{"name":"EPiC series in computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69423371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For many years Polish Higher Education Institutions (HEIs) have been actively incorporating digital solutions. Through the financial support of the state, as part of the Digital Poland program (carried out from 2014 to 2020), universities deployed student management systems and the Ministry of Science and Higher Education built central systems collecting data from HEIs. Changes in higher-education law introduced in 2019 opened the way towards a fully electronic equivalent of the so-called student records folder. In early 2020, the world faced the COVID-19 pandemic, which accelerated the digitalization process and advanced the transition to remote handling of student information. As a result, the Ministry of Science and Higher Education introduced many provisions that sanctioned the replacement of paper documents with electronic ones, provided that their authentication, integrity, non-repudiation, and confidentiality are preserved. The amount and diversity of documents produced in HEIs is substantial. Many of them need to be signed, both for internal use (e.g., student records supporting study processes) and for external use (e.g., documents on student achievements required for further studies or employment). Furthermore, some of these documents travel across borders due to the increasing internationalization of higher education in Europe and beyond. Polish HEIs range in size from 1,000 to almost 50,000 students and often share the same student management system. They would therefore benefit from an easy-to-share, customizable, simple-to-install, low-cost solution for storing digital certificates, signing documents, and validating their signatures. The subject of this paper is the eSignForStudy project, which addresses these needs. The objective of the project is to design and develop a highly configurable eSignature solution for the Polish higher education area, interoperable with the Erasmus Without Paper Network for cross-border digital document validation.
{"title":"Signing made easy – hiding complexity of eSignature solutions in a black box","authors":"Janina Mincer-Daszkiewicz, Tadeusz Gąsior","doi":"10.29007/7knd","DOIUrl":"https://doi.org/10.29007/7knd","url":null,"abstract":"For many years Polish Higher Education Institutions (HEIs) have been actively incorporating digital solutions. Through the financial support of the state, as part of the Digital Poland program (carried out from 2014-2020), universities deployed student management systems and the Ministry of Science and Higher Education built central systems collecting data from HEIs. Changes in law introduced in 2019 in higher education, opened the way towards a fully electronic equivalent of the so-called student records folder.In early 2020, the world faced the COVID-19 pandemic which accelerated the digitalization process advancing forward the transition to remote handling of student information. As a result, many provisions were introduced by the Ministry of Science and Higher Education, which sanctioned the replacement of paper documents with electronic ones, provided that their authentication, integrity, non-repudiation and confidentiality are preserved.The amount and diversity of documents produced in HEIs is substantial. Many of them need to be signed: both for internal use (e.g. student records supporting study processes) and for external use (e.g. documents on student achievements required for further studies or employment). Furthermore, some of these documents travel across borders due to the increasing internationalization of higher education in Europe and beyond.Polish HEIs range in size from 1,000 to almost 50,000 students and often share the same student management system. 
Therefore, they would benefit from easy-to-share, customizable, simple to install and use, low-cost solution for storing digital certificates, signing documents and validating their signatures.The subject of this paper is the eSignForStudy project which addresses these needs. The objective of the project is to design and develop a highly configurable eSignature solution to be used in the Polish higher education area, interoperable with Erasmus Without Paper Network for cross-border digital document validation.","PeriodicalId":93549,"journal":{"name":"EPiC series in computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69423437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data security is an increasing concern, not only in cloud storage data centers but also in personal computing and memory devices. It is important to maintain the confidentiality of data at rest against ransom and theft attacks. However, securing data on the drives at runtime carries performance penalties. In this paper, a study of the performance impact of software-based self-encrypting solid-state drives is presented. The evaluation is conducted on an NVMe subsystem that supports encryption and decryption of user data on a per-I/O-command basis. Additionally, this paper demonstrates the potential of accelerating encryption and decryption for data storage in self-encrypting drives.
{"title":"Performance Study of Software-based Encrypting Data at Rest","authors":"Luka Daoud, Hingkwan Huen","doi":"10.29007/1j1p","DOIUrl":"https://doi.org/10.29007/1j1p","url":null,"abstract":"Data security is an increasing concern, not only in cloud storage data centers but also in personal computing and memory devices. It is important to maintain the confidentiality of data at rest against ransom and theft attacks. However, securing data, in runtime, into the drives is associated with performance penalties. In this paper, a study of the performance impact for software-based self-encrypting solid-state drives is presented. This performance evaluation is conducted on the NVMe subsystem which supports encryption and decryption of the user data on an I/O command basis. Additionally, this paper demonstrates the potential of encryption and decryption acceleration for data storage in self-encrypting drives.","PeriodicalId":93549,"journal":{"name":"EPiC series in computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69418931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years cybersecurity challenges and concerns have become a common theme for discussion by both the government and the private sector. These challenges are partly brought on by the continued use of and dependence on information technology, such as the internet, wireless networks, and smart devices. Additionally, the Covid-19 pandemic has led to an increase in internet use, as it altered the way in which people live and work by forcing businesses and even schools to move to remote working. All these events have made cybersecurity challenges and concerns spiral, especially in Africa, where cybercrime continues to rise and remains a constant threat. This study proposes a cybersecurity community of practice (CoP) as a strategy to address African contextual cybersecurity challenges. This qualitative enquiry, based on organizations on the African continent, identifies key characteristics and objectives of an African cybersecurity CoP. The findings provide practical implications for African CoP members and a stepping stone on what to consider prior to implementing an African CoP for addressing cybersecurity challenges and concerns.
{"title":"Towards an African cybersecurity community of practice","authors":"Rutendo Chibanda, S. Kabanda","doi":"10.29007/cv1x","DOIUrl":"https://doi.org/10.29007/cv1x","url":null,"abstract":"In recent years cybersecurity challenges and concerns have become a common theme for discussion by both the government and private sector. These challenges are partly brought on by the continued use of and dependence on information technology, such as the internet, wireless networks and the development and use of smart devices. Additionally, the Covid-19 pandemic has also led to the increase in internet use as it altered the way in which people live and work through forcing businesses and even schools to move to remote working. All these events have made cybersecurity challenges and concerns spiral and more so in Africa where cybercrime continues to rise and be a constant threat. This study proposes a cybersecurity community of practice as a strategy to address African contextual cybersecurity challenges. This qualitative enquiry, based on organizations on the African continent, identifies key characteristics and objectives of an African cybersecurity CoP. These findings provide practical implications for CoP African members and a steppingstone on what to consider prior to implementing an African CoP for addressing cybersecurity challenges and concerns.","PeriodicalId":93549,"journal":{"name":"EPiC series in computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69431717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sofia Borgato, Marco Bottino, Marta Lovino, E. Ficarra
Since late 2019, the SARS-CoV-2 virus has spread globally, giving rise to several variants over time. These variants, unfortunately, differ from the original sequence identified in Wuhan, thus threatening to compromise the efficacy of the vaccines developed. Some software has been released to recognize currently known and newly spreading variants. However, some of these tools are not entirely automatic, and others do not return a detailed characterization of all the mutations in the samples. Such characterization can be helpful for biologists to understand the variability between samples. This paper presents a Machine Learning (ML) approach to identifying existing and new variants completely automatically. In addition, a detailed table showing all the alterations and mutations found in the samples is provided as output to the user. SARS-CoV-2 sequences are obtained from the GISAID database, and a list of features is custom designed (e.g., the number of mutations in each gene of the virus) to train the algorithm. The recognition of existing variants is performed with a Random Forest classifier, while newly spreading variants are identified with the DBSCAN algorithm. Both the Random Forest and DBSCAN techniques demonstrated high precision on a new variant that arose during the drafting of this paper (used only in the testing phase of the algorithm). Researchers will therefore benefit significantly from the proposed algorithm and the detailed output describing the main alterations of the samples.
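The per-gene mutation-count features can be sketched as follows. The two toy reference genes and the simple position-wise comparison are hypothetical stand-ins; the paper derives its features from GISAID sequences before feeding them to the Random Forest and DBSCAN steps:

```python
# Toy per-gene reference sequences (hypothetical stand-in for the Wuhan reference).
REFERENCE = {"S": "ATGTTTGTT", "N": "ATGTCTGAT"}

def mutation_features(sample):
    """Feature vector: number of position-wise substitutions per gene vs. the reference."""
    return [
        sum(1 for r, s in zip(REFERENCE[gene], sample[gene]) if r != s)
        for gene in sorted(REFERENCE)
    ]
```

Vectors like these are what the Random Forest classifies into known variants, and what DBSCAN clusters to flag samples that fit no existing variant.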
{"title":"SARS-CoV-2 variants classification and characterization","authors":"Sofia Borgato, Marco Bottino, Marta Lovino, E. Ficarra","doi":"10.29007/5qpk","DOIUrl":"https://doi.org/10.29007/5qpk","url":null,"abstract":"As of late 2019, the SARS-CoV-2 virus has spread globally, giving several variants over time. These variants, unfortunately, differ from the original sequence identified in Wuhan, thus risking compromising the efficacy of the vaccines developed. Some software has been released to recognize currently known and newly spread variants. However, some of these tools are not entirely automatic. Some others, instead, do not return a detailed characterization of all the mutations in the samples. Indeed, such characterization can be helpful for biologists to understand the variability between samples. This paper presents a Machine Learning (ML) approach to identifying existing and new variants completely automatically. In addition, a detailed table showing all the alterations and mutations found in the samples is provided in output to the user. SARS-CoV-2 sequences are obtained from the GISAID database, and a list of features is custom designed (e.g., number of mutations in each gene of the virus) to train the algorithm. The recognition of existing variants is performed through a Random Forest classifier while identifying newly spread variants is accomplished by the DBSCAN algorithm. Both Random Forest and DBSCAN techniques demonstrated high precision on a new variant that arose during the drafting of this paper (used only in the testing phase of the algorithm). 
Therefore, researchers will significantly benefit from the proposed algorithm and the detailed output with the main alterations of the samples.","PeriodicalId":93549,"journal":{"name":"EPiC series in computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69422483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sarcasm is generally characterized as ironic or satirical language intended to blame, mock, or amuse in an implied way. Recently, pre-trained language models such as BERT have achieved remarkable success in sarcasm detection. However, many problems cannot be solved by such state-of-the-art models alone; one is the attribute information of entities in sentences. This work investigates the potential of external knowledge about entities in knowledge bases to improve BERT for sarcasm detection. We apply an embedded knowledge graph derived from Wikipedia to the task: we generate vector representations from entities of the knowledge graph and incorporate them into BERT through a mechanism based on self-attention. Experimental results indicate that our approach improves accuracy compared with the BERT model without external knowledge.
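The fusion step can be sketched as plain scaled dot-product attention: a token representation queries the entity vectors and receives their weighted mixture. The tiny pure-Python `attend` helper below is an illustrative assumption about the mechanism, not the paper's BERT integration:

```python
import math

def softmax(xs):
    m = max(xs)                       # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, entity_vecs):
    """Scaled dot-product attention: mix entity vectors into a token representation."""
    d = len(query)
    scores = [sum(q * e for q, e in zip(query, vec)) / math.sqrt(d)
              for vec in entity_vecs]
    weights = softmax(scores)
    return [sum(w * vec[i] for w, vec in zip(weights, entity_vecs)) for i in range(d)]
```

Entities whose embeddings align with the token's representation receive larger weights, so their attribute information dominates the mixed vector passed onward.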
{"title":"Sarcasm Detection with External Entity Information","authors":"Xu Xufei, Shimada Kazutaka","doi":"10.29007/zbzq","DOIUrl":"https://doi.org/10.29007/zbzq","url":null,"abstract":"Sarcasm is generally characterized as ironic or satirical that is intended to blame, mock, or amuse in an implied way. Recently, pre-trained language models, such as BERT, have achieved remarkable success in sarcasm detection. However, there are many problems that cannot be solved by using such state-of-the-art models. One problem is attribute infor- mation of entities in sentences. This work investigates the potential of external knowledge about entities in knowledge bases to improve BERT for sarcasm detection. We apply em- bedded knowledge graph from Wikipedia to the task. We generate vector representations from entities of knowledge graph. Then we incorporate them with BERT by a mechanism based on self-attention. Experimental results indicate that our approach improves the accuracy as compared with the BERT model without external knowledge.","PeriodicalId":93549,"journal":{"name":"EPiC series in computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69454181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Masking is a promising countermeasure against side-channel attacks, and share slicing is its efficient software implementation, storing all the shares in a single register to exploit the parallelism of Boolean instructions. However, the security of share slicing relies on the assumption of bit-independent leakage from those instructions. Gao et al. recently discovered a violation causing a security degradation, called the bit-interaction leakage, by experimentally evaluating ARM processors. However, its cause remained open because of the black box inside the target processors. In this paper, we approach this problem with simulation-based side-channel leakage evaluation using a RISC-V processor. More specifically, we use Western Digital's open-source SweRV EH1 core as a target platform and measure its side-channel traces by running logic simulation and counting the number of signal transitions in the synthesized ALU netlist. We successfully replicate the bit-interaction leakage from a shifter using the simulated traces. By exploiting the flexibility of simulation-based analysis, we positively verify Gao et al.'s hypothesis on how the shifter causes the leakage. Moreover, we discover a new bit-interaction leakage from an arithmetic adder caused by carry propagation. Finally, we discuss hardware and software countermeasures against the bit-interaction leakage.
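Two ingredients of this analysis are easy to reproduce in miniature: the transition-count power model (Hamming distance between consecutive signal values) and the way an adder's carry chain mixes bits of both operands. The helpers below are illustrative sketches under that model, not the SweRV netlist flow:

```python
def toggles(prev, curr):
    """Transition-count power model: number of bits that flip between two values,
    i.e. their Hamming distance, analogous to counting netlist signal transitions."""
    return bin(prev ^ curr).count("1")

def carry_bits(a, b, width=8):
    """Carry-in vector of a + b (mod 2**width). Each sum bit is
    s_i = a_i XOR b_i XOR c_i, so the carries are recovered as a XOR b XOR s.
    Every carry depends on lower bits of *both* operands: the bit interaction
    behind the adder leakage."""
    s = (a + b) & ((1 << width) - 1)
    return a ^ b ^ s
```

For example, `carry_bits(3, 1)` is `0b110`: adding 3 and 1 propagates carries into bits 1 and 2, so the consumed power depends jointly on bits of both operands, violating the bit-independence assumption that share slicing needs.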
{"title":"Simulation Based Evaluation of Bit-Interaction Side-Channel Leakage on RISC-V Processor","authors":"Tamon Asano, T. Sugawara","doi":"10.29007/5wq7","DOIUrl":"https://doi.org/10.29007/5wq7","url":null,"abstract":"Masking is a promising countermeasure against side-channel attack, and share slic- ing is its efficient software implementation that stores all the shares in a single register to exploit the parallelism of Boolean instructions. However, the security of share slicing relies on the assumption of bit-independent leakage from those instructions. Gao et al. recently discovered a violation causing a security degradation, called the bit-interaction leakage, by experimentally evaluating ARM processors. However, its causality remained open because of the blackbox inside the target processors. In this paper, we approach this problem with simulation-based side-channel leakage evaluation using a RISC-V processor. More specifically, we use Western Digital’s open-source SweRV EH1 core as a target plat- form and measure its side-channel traces by running logic simulation and counting the number of signal transitions in the synthesized ALU netlist. We successfully replicate the bit-interaction leakage from a shifter using the simulated traces. By exploiting the flexi- bility of simulation-based analysis, we positively verify Gao et al.’s hypothesis on how the shifter causes the leakage. Moreover, we discover a new bit-interaction leakage from an arithmetic adder caused by carry propagation. 
Finally, we discuss hardware and software countermeasures against the bit-interaction leakage.","PeriodicalId":93549,"journal":{"name":"EPiC series in computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69422270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}