Multi-domain sentiment classification trains a classifier using multiple domains and then tests the classifier on one of the domains. Importantly, no domain is assumed to have sufficient labeled data; instead, the goal is to leverage information between domains, making multi-domain sentiment classification a very realistic scenario. Typically, labeled data is costly because humans must classify it manually. In this context, we propose the MUTUAL approach, which learns general and domain-specific sentence embeddings that are also context-aware due to an attention mechanism. We propose using a stacked BiLSTM-based autoencoder with an attention mechanism to generate these two types of sentence embeddings. Then, using the Jensen-Shannon (JS) distance, the general sentence embeddings of the four domains most similar to the target domain are selected. The selected general sentence embeddings and the domain-specific embeddings are concatenated and fed into a dense layer for training. Evaluation results on public datasets with 16 different domains demonstrate the efficiency of our model. In addition, we propose an active learning algorithm that first applies the elliptic envelope for outlier removal to a pool of unlabeled data, which the MUTUAL model then classifies. Next, the most uncertain data points are selected for labeling based on the least confidence metric. The experiments show that querying 38% of the original data achieves higher accuracy than random sampling.
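The least-confidence query step can be sketched in a few lines. This is an illustrative re-creation, not the authors' code: the function name and toy probabilities are ours, and the elliptic-envelope outlier filtering is assumed to have already pruned the pool (e.g. via scikit-learn's `EllipticEnvelope`).

```python
def least_confidence_query(probs, k):
    """Return indices of the k samples the model is least confident about.

    probs: per-class probability lists, one per unlabeled sample
           (here these would come from the MUTUAL classifier).
    Least-confidence score = 1 - max class probability.
    """
    scores = [1.0 - max(p) for p in probs]
    # Rank samples from most to least uncertain and take the top k.
    ranked = sorted(range(len(probs)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

probs = [
    [0.95, 0.05],  # confident prediction -> low score
    [0.55, 0.45],  # near-uniform -> most uncertain
    [0.70, 0.30],
]
print(least_confidence_query(probs, 2))  # -> [1, 2]
```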
{"title":"MUTUAL: Multi-Domain Sentiment Classification via Uncertainty Sampling","authors":"K. Katsarou, Roxana Jeney, K. Stefanidis","doi":"10.1145/3555776.3577765","journal":"Applied Computing Review","publicationDate":"2023-03-27"}
Yuning Wang, I. Azimi, M. Feli, A. Rahmani, P. Liljeberg
Internet-of-Things-based systems have recently emerged, enabling long-term health monitoring of individuals' daily activities. The data collected from such systems are multivariate and longitudinal, which calls for tailored analysis techniques to extract trends and abnormalities from the monitoring. Different methods in the literature have been proposed to identify trends in data. However, they do not account for time dependency and cannot distinguish changes in long-term health data. Moreover, their evaluations are limited to lab settings or short-term analysis. Long-term health monitoring applications require a modeling technique that merges the multisensory data into a meaningful indicator. In this paper, we propose a personalized neural network method to track changes and abnormalities in multivariate health data. Our proposed method leverages convolutional and graph attention layers to produce personalized scores indicating the abnormality level (i.e., deviations from the baseline) of users' data throughout the monitoring. We implement and evaluate the proposed method via a case study on long-term maternal health monitoring. Sleep and stress of pregnant women are remotely monitored using a smartwatch and a mobile application during pregnancy and 3 months postpartum. Our analysis includes 46 women. We build personalized sleep and stress models for each individual using the data from the beginning of the monitoring. Then, we compare the two groups by measuring the data variations. The abnormality scores produced by the proposed method are compared with findings from the self-report questionnaire data collected during the monitoring and with abnormality scores generated by an autoencoder method. The proposed method outperforms the baseline methods in exploring the changes between high-risk and low-risk pregnancy groups. The proposed method's scores also correlate with the self-report data. Consequently, the results indicate that the proposed method effectively detects abnormality in multivariate long-term health monitoring.
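As a rough intuition for "deviation from the baseline", a personalized abnormality score can be computed as a mean absolute z-score against each user's own baseline. This stand-in sketch is ours and is far simpler than the paper's convolutional/graph-attention model:

```python
import statistics

def abnormality_score(baseline, new_values):
    """Score deviations of new multivariate samples from a personal baseline.

    baseline: list of feature vectors from the start of the monitoring.
    new_values: later feature vectors; score = mean absolute z-score
    across dimensions (a crude proxy for the paper's learned scores).
    """
    dims = len(baseline[0])
    means = [statistics.mean(v[d] for v in baseline) for d in range(dims)]
    # Guard against zero variance in a dimension (fall back to 1.0).
    sds = [statistics.stdev(v[d] for v in baseline) or 1.0 for d in range(dims)]
    return [
        sum(abs((x[d] - means[d]) / sds[d]) for d in range(dims)) / dims
        for x in new_values
    ]

# A sample identical to the baseline mean scores 0; a drifted one scores higher.
print(abnormality_score([[1, 10], [3, 10]], [[2, 10], [4, 10]]))
```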
{"title":"Personalized Graph Attention Network for Multivariate Time-series Change Analysis: A Case Study on Long-term Maternal Monitoring","authors":"Yuning Wang, I. Azimi, M. Feli, A. Rahmani, P. Liljeberg","doi":"10.1145/3555776.3577675","journal":"Applied Computing Review","publicationDate":"2023-03-27"}
The application of deep learning-based (DL) network intrusion detection systems (NIDS) enables effective automated detection of cyberattacks. Such models can extract valuable features from high-dimensional and heterogeneous network traffic with minimal feature engineering and provide high detection accuracy. However, it has been shown that DL can be vulnerable to adversarial examples (AEs), which mislead classification decisions at inference time, and several works have shown that AEs are indeed a threat against DL-based NIDS. In this work, we argue that these threats are not necessarily realistic. Indeed, some general techniques used to generate AEs manipulate features in a way that would be inconsistent with actual network traffic. In this paper, we first implement the main AE attacks selected from the literature (FGSM, BIM, PGD, NewtonFool, CW, DeepFool, EN, Boundary, HSJ, ZOO) for two different datasets (WSN-DS and BoT-IoT) and compare their relative performance. We then analyze the perturbations generated by these attacks and use the resulting metrics to establish a notion of "attack unrealism". We conclude that, for these datasets, some of these attacks are performant but not realistic.
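Of the listed attacks, FGSM is the simplest to illustrate. The sketch below is a generic FGSM step, not the authors' implementation, and hints at why naive feature perturbation can yield values that are invalid as real network traffic:

```python
def fgsm(x, grad, eps):
    """One Fast Gradient Sign Method step: x_adv = x + eps * sign(dL/dx).

    x: feature vector of one sample; grad: gradient of the classifier's
    loss w.r.t. x. Applied blindly to traffic features (packet counts,
    protocol flags, ...), the result may be inconsistent with real
    traffic -- the "unrealism" the paper measures.
    """
    sign = lambda g: (g > 0) - (g < 0)  # -1, 0, or +1
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Each feature is nudged by +/- eps in the direction that increases the loss.
print(fgsm([0.2, 0.5, 0.9], [1.3, -0.4, 0.0], 0.1))
```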
{"title":"Realism versus Performance for Adversarial Examples Against DL-based NIDS","authors":"Huda Ali Alatwi, C. Morisset","doi":"10.1145/3555776.3577671","journal":"Applied Computing Review","publicationDate":"2023-03-27"}
Smart cities are increasingly being built around the world, and intelligent public transportation plays a very important role in them. Improving the quality of public transportation by reducing crowdedness and total transit time is a critical issue. To this end, we propose a bus operation prediction model based on deep learning techniques and use this model to dynamically adjust the bus departure time to improve bus service quality. Specifically, we first combine bus fare card data and open data, such as weather conditions and traffic accidents, to build models for predicting the number of passengers who board/alight the bus at a stop, the boarding and alighting time, and the bus running time between stops. We then combine these models to predict the operation of the bus and decide the best departure time within the bus departure interval. Experimental results on real-world data of Taichung City bus route #300 show that our approach to deciding the bus departure time is effective for improving its service quality.
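Once the component predictors exist, choosing the departure time reduces to minimizing a predicted cost over the allowed interval. The sketch below is ours and uses a hypothetical cost function in place of the paper's learned models for boarding counts, dwell time, and inter-stop running time:

```python
def best_departure(candidates, predict_cost):
    """Pick the departure time minimizing a predicted service cost.

    candidates: candidate offsets (minutes) within the departure interval.
    predict_cost: stand-in for the combined predictions (crowdedness
    plus total transit time) produced by the paper's models.
    """
    return min(candidates, key=predict_cost)

# Toy cost: quadratic crowdedness penalty around an ideal slot (hypothetical).
cost = lambda t: (t - 4) ** 2 + 3
print(best_departure(range(0, 10), cost))  # -> 4
```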
{"title":"Improving the Quality of Public Transportation by Dynamically Adjusting the Bus Departure Time","authors":"Shuheng Cao, S. Thamrin, Arbee L. P. Chen","doi":"10.1145/3555776.3577596","journal":"Applied Computing Review","publicationDate":"2023-03-27"}
Mahyar Tourchi Moghaddam, Andreas Edal Pedersen, William Walter Lillebroe Bolding, T. Worm
The Single Sign-On (SSO) method eases the authentication and authorization process. The solution substantially improves the user experience, since users authenticate once to access multiple services without re-authenticating. This paper adopts an incremental prototyping approach to develop an SSO system. The research reveals that while SSO improves users' quality of experience, it can introduce performance and security issues if traditional architectures are adopted. Thus, a microservices-based approach with containerization is subsequently proposed to overcome SSO's quality issues in practice. The SSO system is containerized using Docker and managed using Docker Compose. The results show a significant performance and security improvement.
{"title":"A Performant and Secure Single Sign-On System Using Microservices","authors":"Mahyar Tourchi Moghaddam, Andreas Edal Pedersen, William Walter Lillebroe Bolding, T. Worm","doi":"10.1145/3555776.3577869","journal":"Applied Computing Review","publicationDate":"2023-03-27"}
Crystalline materials, such as metals and semiconductors, nearly always contain a special defect type called a dislocation. This defect decisively determines many important material properties, e.g., strength, fracture toughness, or ductility. Over the past years, significant effort has been put into understanding dislocation behavior across different length scales via experimental characterization techniques and simulations. This paper introduces the dislocation ontology (DISO), which defines the concepts and relationships related to linear defects in crystalline materials. We developed DISO using a top-down approach, in which we start by defining the most general concepts in the dislocation domain and subsequently specialize them. DISO is published through a persistent URL following W3C best practices for publishing Linked Data. Two potential use cases for DISO are presented to illustrate its usefulness in the dislocation dynamics domain. The ontology is evaluated in two directions: its success in modeling a real-world domain and its richness.
{"title":"DISO: A Domain Ontology for Modeling Dislocations in Crystalline Materials","authors":"Ahmad Zainul Ihsan, S. Fathalla, S. Sandfeld","doi":"10.1145/3555776.3578739","journal":"Applied Computing Review","publicationDate":"2023-03-27"}
Mateusz Gniewkowski, H. Maciejewski, T. Surmacz, Wiktor Walentynowicz
In this paper, we show how methods known from Natural Language Processing (NLP) can be used to detect anomalies in HTTP requests and malicious URLs. Most current solutions for this problem are either rule-based or trained using manually selected features. Modern NLP methods, however, have great potential for capturing a deep understanding of samples and therefore improving classification results. Other methods that rely on a similar idea often ignore the interpretability of the results, which is so important in machine learning. We aim to fill this gap. In addition, we show to what extent the proposed solutions are resistant to concept drift. In our work, we compare three different vectorization methods: simple BoW, fastText, and the current state-of-the-art language model RoBERTa. The obtained vectors are later used in the classification task. To explain our results, we utilize the SHAP method. We evaluate the feasibility of our methods on four different datasets: CSIC2010, UNSW-NB15, MALICIOUSURL, and ISCX-URL2016. The first two are related to HTTP traffic; the other two contain malicious URLs. The results we show are comparable to others or better, and, most importantly, interpretable.
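A minimal character-n-gram Bag-of-Words vectorizer, the simplest of the three compared methods, might look like the sketch below. The n-gram size and the toy URLs are our assumptions, not details from the paper:

```python
from collections import Counter

def char_ngrams(s, n=3):
    """Overlapping character n-grams of a string."""
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def bow_vector(url, vocab, n=3):
    """Count-based BoW vector of a URL over a fixed n-gram vocabulary."""
    counts = Counter(char_ngrams(url, n))
    return [counts.get(tok, 0) for tok in vocab]

urls = ["http://a.com/login", "http://a.com/<script>"]
# Vocabulary built from the (toy) training corpus; fixed order for all vectors.
vocab = sorted(set(tok for u in urls for tok in char_ngrams(u)))
vecs = [bow_vector(u, vocab) for u in urls]
```

These vectors would then feed a downstream classifier, with per-n-gram SHAP values giving the interpretability the paper emphasizes.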
{"title":"Sec2vec: Anomaly Detection in HTTP Traffic and Malicious URLs","authors":"Mateusz Gniewkowski, H. Maciejewski, T. Surmacz, Wiktor Walentynowicz","doi":"10.1145/3555776.3577663","journal":"Applied Computing Review","publicationDate":"2023-03-27"}
Growing technologies like virtualization and artificial intelligence have become more popular on mobile devices. However, the lack of resources for processing these applications remains a major hurdle. Collaborative edge and cloud computing is one solution to this problem. We propose a multi-period deep deterministic policy gradient (MP-DDPG) algorithm that finds an optimal offloading policy by partitioning the task and offloading it to the collaborative cloud and edge network to reduce energy consumption. Our results show that MP-DDPG achieves the minimum latency and energy consumption in the collaborative cloud network.
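The latency-energy trade-off behind the offloading decision can be illustrated with a toy weighted objective. The cost model below is entirely hypothetical (all parameters are ours), and the paper learns the policy with DDPG rather than the grid search used here:

```python
def best_split(task_bits, local_rate, edge_rate, bw, energy_per_bit, w=0.5):
    """Choose the fraction of a task to offload under a toy cost model.

    Latency: max of local compute time and (transfer + edge compute) time,
    since the two parts run in parallel. Energy: transmission energy for
    the offloaded part. w trades latency against energy.
    """
    def cost(f):
        latency = max((1 - f) * task_bits / local_rate,
                      f * task_bits / bw + f * task_bits / edge_rate)
        energy = f * task_bits * energy_per_bit
        return w * latency + (1 - w) * energy

    # Exhaustive search over offload fractions 0.00, 0.01, ..., 1.00.
    fractions = [i / 100 for i in range(101)]
    return min(fractions, key=cost)
```

A DDPG agent would instead learn this continuous action directly from state observations, which matters once rates and loads vary over time.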
{"title":"MP-DDPG: Optimal Latency-Energy Dynamic Offloading Scheme in Collaborative Cloud Networks","authors":"Jui Mhatre, Ahyoung Lee","doi":"10.1145/3555776.3577767","journal":"Applied Computing Review","publicationDate":"2023-03-27"}
Thomas Haines, Johannes Müller, Iñigo Querejeta-Azurmendi
Electronic voting (e-voting) is regularly used in many countries and organizations for legally binding elections. In order to conduct such elections securely, numerous e-voting systems have been proposed over the last few decades. Notably, some of these systems were designed to provide coercion-resistance. This property protects against potential adversaries trying to swing an election by coercing voters. Despite the multitude of existing coercion-resistant e-voting systems, to date, only a few of them can handle large-scale Internet elections efficiently. One of these systems, VoteAgain (USENIX Security 2020), was originally claimed secure under trust assumptions similar to state-of-the-art e-voting systems without coercion-resistance. In this work, we review VoteAgain's security properties. We discover that, unlike originally claimed, VoteAgain is no more secure than a trivial voting system with a completely trusted election authority. To mitigate this issue, we propose a variant of VoteAgain that effectively reduces the trust placed in the election authorities and, at the same time, preserves VoteAgain's usability and efficiency. Altogether, our findings bring the state of science one step closer to the goal of scalable coercion-resistant e-voting that is secure under reasonable trust assumptions.
{"title":"Scalable Coercion-Resistant E-Voting under Weaker Trust Assumptions","authors":"Thomas Haines, Johannes Müller, Iñigo Querejeta-Azurmendi","doi":"10.1145/3555776.3578730","journal":"Applied Computing Review","publicationDate":"2023-03-27"}
Smart contracts are programs that are executed on the blockchain and can hold, manage, and transfer assets in the form of cryptocurrencies. The contract's execution is performed on-chain and is subject to consensus, i.e. every node on the blockchain network has to run the function calls and keep track of their side-effects, including updates to the balances and the contract's storage. The notion of gas is introduced in most programmable blockchains to prevent DoS attacks from malicious parties who might try to slow down the network by performing time-consuming and resource-heavy computations. While the gas idea has largely succeeded in its goal of avoiding DoS attacks, the resulting fees are extremely high. For example, in June-September 2022, on Ethereum alone, there was an average total gas usage of 2,706.8 ETH ≈ 3,938,749 USD per day. We propose a protocol for alleviating these costs by moving most of the computation off-chain while preserving enough data on-chain to guarantee an implicit consensus about the contract state and ownership of funds in case of dishonest parties. We perform extensive experiments over 3,330 real-world Solidity contracts that were involved in 327,132 transactions in June-September 2022 on Ethereum and show that our approach reduces their gas usage by 40.09 percent, which amounts to 442,651 USD.
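The general idea of keeping "enough data on-chain" can be illustrated with a plain hash commitment: the chain stores only a digest of the contract state, which any party holding the full state can recompute and verify. This is a generic sketch of the commitment pattern, not the paper's actual protocol:

```python
import hashlib
import json

def commit(state):
    """Digest of a contract state; only this value would be kept on-chain.

    Canonical JSON (sorted keys) ensures that the same logical state
    always hashes to the same digest regardless of dict ordering.
    """
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Off-chain state held by the parties (toy example).
state = {"balances": {"alice": 10, "bob": 5}, "nonce": 7}
digest = commit(state)

# Later, anyone holding the full state can prove it matches the digest;
# a tampered state fails the check.
assert commit(state) == digest
```

Any single honest party keeping the full state can thus expose a dishonest party who presents a state that does not hash to the on-chain digest.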
{"title":"Alleviating High Gas Costs by Secure and Trustless Off-chain Execution of Smart Contracts","authors":"Soroush Farokhnia, Amir Kafshdar Goharshady","doi":"10.1145/3555776.3577833","journal":"Applied Computing Review","publicationDate":"2023-03-27"}