Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.257
Yuqing Qiu, Qingni Shen, Yang Luo, Cong Li, Zhonghai Wu
Because physical resources are shared, co-residency of virtual machines (VMs) in the cloud is inevitable, and it introduces security threats such as side-channel attacks and covert channels. Most previous work has focused on detecting and resisting a bewildering variety of co-resident attacks. More generally, improving the VM deployment strategy can also mitigate co-resident attacks effectively by reducing the probability of VM co-residency. In this paper, we propose a co-residency-resistant VM deployment strategy and define four thresholds that tune the strategy between security and load balancing. Moreover, two metrics (VM co-residency probability and user co-residency coverage probability) are introduced to evaluate the deployment strategy. Finally, we implement the strategy and run experiments on both OpenStack and CloudSim. The results show that our strategy reduces VM co-residency by 50% to 66.7% and user co-residency by 50% to 66% compared with existing strategies.
Title: A Secure Virtual Machine Deployment Strategy to Reduce Co-residency in Cloud
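As a rough illustration of the first metric, VM co-residency probability can be read as the fraction of cross-user VM pairs that end up on the same physical host. The sketch below assumes that reading (the paper's exact formula may differ), and the `placement` map is a hypothetical example.

```python
from itertools import combinations

def co_residency_probability(placement):
    """Fraction of cross-user VM pairs that share a physical host.

    `placement` maps vm_id -> (user_id, host_id). This is one plausible
    reading of the paper's metric, not its exact formula.
    """
    vms = list(placement.values())
    pairs = co_resident = 0
    for (u1, h1), (u2, h2) in combinations(vms, 2):
        if u1 == u2:
            continue  # same-user pairs pose no co-residency threat
        pairs += 1
        co_resident += (h1 == h2)
    return co_resident / pairs if pairs else 0.0

# A placement that spreads users across hosts keeps the metric low.
spread = {"vm1": ("alice", "h1"), "vm2": ("bob", "h2"), "vm3": ("bob", "h1")}
print(co_residency_probability(spread))  # 1 of 2 cross-user pairs share a host -> 0.5
```

A deployment strategy then amounts to choosing `host_id`s so this fraction stays low while respecting load-balancing thresholds.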
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.333
Patricia Miquilini, R. G. Rossi, M. G. Quiles, V. V. D. Melo, M. Basgalupp
Automatic data classification is often performed by supervised learning algorithms, which produce a model to classify new instances. Because labeled instances are expensive, semi-supervised learning (SSL) methods are an alternative for data classification, since learning demands only a few labeled instances. Among the many SSL algorithms, graph-based ones have notable strengths. In particular, graph-based models can identify classes of different distributions without prior knowledge of statistical model parameters. However, a drawback that can hurt their classification performance lies in the construction of the graph, which requires measuring distances (or similarities) between instances. Since a particular distance function can improve performance on some data sets and degrade it on others, we introduce GEAD, a Grammatical Evolution approach for Automatically designing Distance functions for Graph-based semi-supervised learning. We perform extensive experiments on 100 public data sets to assess the performance of our approach and compare it with traditional distance functions from the literature. Results show that GEAD designs distance functions that significantly outperform the manually designed baselines on different predictive measures, such as Micro-F1 and Macro-F1.
Title: Automatically Design Distance Functions for Graph-Based Semi-Supervised Learning
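The core idea of deriving a distance function from a grammar via integer codons can be sketched as follows. The two-codon scheme and the `OPS`/`TERMS` primitives are illustrative assumptions, far smaller than GEAD's actual grammar, and the evolutionary search over codons is not shown.

```python
# Toy grammar in the spirit of GEAD: per-attribute difference terms are
# combined by a primitive aggregation operation into a distance function.
OPS = [
    lambda acc, v: acc + v,      # sum the per-attribute contributions
    lambda acc, v: max(acc, v),  # or keep only the largest one
]
TERMS = [
    lambda d: abs(d),  # L1-style term
    lambda d: d * d,   # squared-error term
]

def make_distance(codons):
    """Decode a codon list into a distance over equal-length vectors.

    Codon 0 picks the per-attribute term; codon 1 picks the aggregation.
    GEAD uses a much richer grammar and evolves the codons with a
    genetic algorithm instead of fixing them by hand.
    """
    term = TERMS[codons[0] % len(TERMS)]
    op = OPS[codons[1] % len(OPS)]
    def dist(x, y):
        acc = 0.0
        for xi, yi in zip(x, y):
            acc = op(acc, term(xi - yi))
        return acc
    return dist

manhattan_like = make_distance([0, 0])  # sum of abs diffs -> Manhattan
cheby_like = make_distance([0, 1])      # max of abs diffs -> Chebyshev
print(manhattan_like([0.0, 0.0], [1.0, 2.0]))  # 3.0
print(cheby_like([0.0, 0.0], [1.0, 2.0]))      # 2.0
```

Decoded distances like these would then feed the graph-construction step of a graph-based SSL classifier.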
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.282
Manel Mrabet, Yosra Ben Saied, L. Saïdane
Trust management systems provide a means for trustworthy interactions in cloud environments. However, trust establishment can be compromised when malicious cloud users intentionally provide unfair feedback to decrease the reputation of some cloud providers or to benefit others. In this paper, we define "Feedback Entropy" as a new metric to detect unfair rating attacks. Building on it, we propose a detection system that identifies unfair rating attacks by monitoring users' feedback over short periods of time. Our approach is designed to detect such attacks rapidly, at the point in time they appear, and to scale effectively as the number of feedback reports grows. Experimental results demonstrate the advantages of the introduced metric and the good performance of the proposed detection system.
Title: Feedback Entropy: A New Metric to Detect Unfair Rating Attacks for Trust Computing in Cloud Environments
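The paper's exact formulation is not reproduced here, but the underlying intuition can be sketched with plain Shannon entropy over a window of ratings: a coordinated burst of near-identical unfair feedback collapses the window's entropy toward zero, which a monitor can flag.

```python
import math
from collections import Counter

def feedback_entropy(ratings):
    """Shannon entropy of the rating distribution in one monitoring window.

    Illustrative stand-in for the paper's metric: diverse honest feedback
    keeps entropy high, while a collusive unfair-rating burst floods the
    window with one value and drives entropy toward zero.
    """
    counts = Counter(ratings)
    n = len(ratings)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

normal_window = [5, 4, 3, 5, 2, 4, 1, 3]  # diverse honest feedback
attack_window = [1, 1, 1, 1, 1, 1, 1, 5]  # bad-mouthing burst against a provider
print(feedback_entropy(normal_window) > feedback_entropy(attack_window))  # True
```

A detector would compute this per short time window and raise an alert when the entropy drops below a calibrated threshold.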
Secure interoperation is an important technology for protecting shared data in multi-domain environments. The IRBAC (Interoperable Role-Based Access Control) 2000 model was proposed to achieve secure interoperation between two or more RBAC administrative domains. Static Separation of Duties (SSoD) is an important security policy in RBAC, but it is not enforced in the IRBAC 2000 model. Consequently, previous work has studied the problem of SMER (Statically Mutually Exclusive Roles) constraint violations between two RBAC domains in the IRBAC 2000 model. However, none of it addresses how to preserve the privacy of RBAC policies, such as roles, role hierarchies, and user-role assignments, while detecting SMER constraint violations, when the interoperating domains do not want to disclose these policies to each other or to others. To enable privacy-preserving detection of SMER constraint violations, we first introduce a solution without a privacy-preserving mechanism that uses a matrix product. We then propose a privacy-preserving solution that securely detects SMER constraint violations without disclosing any RBAC policy, based on a secure three-party protocol for matrix product computation. Efficiency analysis and experimental comparison show that the secure three-party matrix-product protocol based on the Paillier cryptosystem is more efficient and practical.
Title: Privacy-Preserving Detection of Statically Mutually Exclusive Roles Constraints Violation in Interoperable Role-Based Access Control
Authors: Meng Liu, Xuyun Zhang, Chi Yang, Shaoning Pang, Deepak Puthal, Kaijun Ren
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.277
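The first, non-private step — detecting SMER violations with a plain matrix product — can be sketched as follows. The matrix shapes and role-mapping semantics are illustrative choices rather than the paper's exact notation, and the secure three-party Paillier-based protocol is not shown.

```python
def matmul(A, B):
    """Plain (non-private) matrix product over nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# User-role assignment in domain A (rows: users, cols: A-roles) and the
# cross-domain role mapping (rows: A-roles, cols: B-roles). Both matrices
# are illustrative; in the private protocol neither domain reveals its own.
UA  = [[1, 1],   # user u0 holds A-roles a0 and a1
       [0, 1]]   # user u1 holds only a1
MAP = [[1, 0],   # a0 maps to B-role b0
       [0, 1]]   # a1 maps to B-role b1

acquired = matmul(UA, MAP)  # B-roles each A-user acquires via interoperation

smer_pairs = [(0, 1)]       # b0 and b1 are statically mutually exclusive
violations = [(u, r, s) for u, row in enumerate(acquired)
              for (r, s) in smer_pairs if row[r] and row[s]]
print(violations)  # u0 acquires both exclusive roles -> [(0, 0, 1)]
```

The privacy-preserving variant would compute the same product on Paillier-encrypted matrices so that neither domain learns the other's assignments or mappings.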
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.318
Nathanael R. Weidler, Dane Brown, S. Mitchell, Joel Anderson, J. Williams, Austin Costley, Chase Kunz, Christopher Wilkinson, Remy Wehbe, Ryan M. Gerdes
Microcontrollers are found in many everyday devices and will only become more prevalent as the Internet of Things (IoT) gains momentum. As such, it is increasingly important that they are reasonably secure from known vulnerabilities. If we do not improve the security posture of these devices, attackers will find ways to exploit vulnerabilities for their own gain. Because security protections in modern systems prevent the execution of injected shellcode, Return-Oriented Programming (ROP) has emerged as a more reliable way to execute malicious code following such attacks. ROP takes over the execution of a program by modifying a function's return address through an exploit vector, then returning to small segments of otherwise innocuous code located in executable memory, one after another, to carry out the attacker's aims. We show that the Tiva TM4C123GH6PM microcontroller, which uses a Cortex-M4F processor, can be fully controlled with this technique. Sufficient code is pre-loaded into a ROM on Tiva microcontrollers to erase and rewrite the flash memory where the program resides. That same ROM is then searched for a Turing-complete gadget set that allows arbitrary execution. This would let an attacker repurpose the microcontroller, altering its original functionality to their own malicious ends.
Title: Return-Oriented Programming on a Cortex-M Processor
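As a toy illustration of the gadget search: when PC is in the register list, Thumb encodes POP as 0xBDxx, and since instructions are stored as little-endian halfwords the second byte is 0xBD, so a byte scan over a ROM image can locate candidate `pop {...,pc}` gadget endpoints. This is not the authors' tooling; the sample "ROM" bytes and base address are made up.

```python
def find_pop_pc_gadgets(rom, base=0x00000000):
    """Scan a ROM image for Thumb `pop {...,pc}` gadget endpoints.

    Scanning only even offsets is a simplification: Thumb-2 mixes 16- and
    32-bit instructions, and real gadget finders also consider unintended
    decodings starting mid-instruction.
    """
    gadgets = []
    for i in range(0, len(rom) - 1, 2):
        if rom[i + 1] == 0xBD:                   # POP with PC in reglist
            gadgets.append((base + i, rom[i]))   # (address, popped reglist)
    return gadgets

# Toy "ROM": nop; pop {r0, pc}; nop
rom = bytes([0x00, 0xBF,   # nop          (0xBF00)
             0x01, 0xBD,   # pop {r0, pc} (0xBD01)
             0x00, 0xBF])  # nop
print(find_pop_pc_gadgets(rom, base=0x0100))  # [(258, 1)]
```

Chaining the code preceding such endpoints, an attacker builds the gadget set the abstract describes.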
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.296
J. McDonald, Ramya Manikyam, W. Glisson, T. Andel, Y. Gu
Digital forensic investigators today are faced with numerous problems when recovering footprints of criminal activity that involve the use of computer systems. Investigators need the ability to recover evidence in a forensically sound manner, even when criminals actively work to alter the integrity, veracity, and provenance of data, applications and software that are used to support illicit activities. In many ways, operating systems (OS) can be strengthened from a technological viewpoint to support verifiable, accurate, and consistent recovery of system data when needed for forensic collection efforts. In this paper, we extend the ideas for forensic-friendly OS design by proposing the use of a practical form of computing on encrypted data (CED) and computing with encrypted functions (CEF) which builds upon prior work on component encryption (in circuits) and white-box cryptography (in software). We conduct experiments on sample programs to provide analysis of the approach based on security and efficiency, illustrating how component encryption can strengthen key OS functions and improve tamper-resistance to anti-forensic activities. We analyze the tradeoff space for use of the algorithm in a holistic approach that provides additional security and comparable properties to fully homomorphic encryption (FHE).
Title: Enhanced Operating System Protection to Support Digital Forensic Investigations
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.311
Long Cheng, Kai Huang, Gang Chen, Biao Hu, A. Knoll
Nowadays, many embedded systems consist of a mix of control applications and soft real-time tasks. This paper studies how to ensure the worst-case quality of control for control applications under disturbances while providing maximal resources to soft real-time tasks. To solve this problem, we propose a mixed-criticality control system model in which tasks can switch between two operating modes, LO and HI, according to the controlled plant states. In HI mode, the worst-case quality of control for the plants is guaranteed, while in LO mode, system resources are balanced between the two classes of tasks. We compare our approach with two other approaches from the literature. Case study results demonstrate the effectiveness of our system model.
Title: Mixed-Criticality Control System with Performance and Robustness Guarantees
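The LO/HI switching driven by plant state can be sketched as a hysteresis rule: escalate to HI when the tracking error grows large (worst-case quality of control must be guaranteed), and drop back to LO only once the error has settled, freeing resources for the soft real-time tasks. The thresholds and the hysteresis itself are illustrative assumptions, not the paper's switching conditions.

```python
def select_mode(plant_error, hi_threshold, lo_threshold, current="LO"):
    """Hysteresis-style LO/HI mode switch driven by the plant state.

    Switch to HI when |error| exceeds hi_threshold; return to LO only
    when |error| falls below the (smaller) lo_threshold, so the system
    does not oscillate between modes near a single boundary.
    """
    if current == "LO" and abs(plant_error) > hi_threshold:
        return "HI"
    if current == "HI" and abs(plant_error) < lo_threshold:
        return "LO"
    return current

mode = "LO"
for err in [0.1, 0.4, 1.2, 0.9, 0.3, 0.1]:  # a disturbance hits, then decays
    mode = select_mode(err, hi_threshold=1.0, lo_threshold=0.2, current=mode)
    print(mode)  # LO, LO, HI, HI, HI, LO
```

While the mode is HI the scheduler would prioritize the control tasks; in LO the slack goes to the soft real-time workload.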
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.365
Chathuranga Rathnayaka, Aruna Jamdagni
Static analysis of malware is complicated by its reliance on string-searching methods. Forensic investigation of physical memory, or memory forensics, provides a comprehensive analysis of malware by checking for traces of malware in memory dumps created while it runs in an operating system. In this study, we propose an efficient and robust framework for analysing complex malware by integrating static analysis techniques with memory forensic techniques. The proposed framework was evaluated on two hundred real malware samples and achieved a 90% detection rate. These results were compared against and verified with the results obtained from www.virustotal.com, an online malware analysis tool. Additionally, we identified the sources of many malware samples.
Title: An Efficient Approach for Advanced Malware Analysis Using Memory Forensic Technique
High-productivity embedded network software is required to run embedded systems within the Internet of Things (IoT). Tomakomai InterNETworking (TINET) is a Transmission Control Protocol/Internet Protocol (TCP/IP) protocol stack for use in embedded systems. Although TINET is a compact protocol stack, it comprises a large amount of complex source code and is difficult to maintain, extend, and analyze. To improve scalability and configurability, this paper proposes TINET componentized with the Toyohashi Open Platform for Embedded Real-time Systems (TOPPERS) embedded component system (TINET+TECS), a component-based TCP/IP protocol stack for embedded systems. This component-based TINET offers software developers high productivity through variable network buffer sizes and the ability to add or remove TCP (or UDP) functionality. TINET+TECS uses a dynamic TECS component connection method to satisfy the original TINET specifications. The results of an experimental comparison between the proposed component-based TINET and the original show that execution time and memory consumption overhead are reduced and configurability is improved.
Title: TINET+TECS: Component-Based TCP/IP Protocol Stack for Embedded Systems
Authors: Takuro Yamamoto, Takuma Hara, Takuya Ishikawa, Hiroshi Oyama, H. Takada, Takuya Azumi
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.313
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.334
Arua De M. Sousa, Ana Carolina Lorena, M. Basgalupp
One of the key aspects of the successful use of kernel methods such as Support Vector Machines is the proper choice of the kernel function. While there are several well-known kernel functions that produce satisfactory results for various applications (e.g., RBF), they do not take into account specific characteristics of the data sets. Moreover, they have a set of parameters to be tuned. In this paper, we propose GEEK, a Grammatical Evolution approach for automatically Evolving Kernel functions. GEEK uses a grammar composed of simple mathematical operations extracted from known kernels and is also able to optimize some of their parameters. When combined through Grammatical Evolution, these operations give rise to more complex kernel functions, adapted to each specific problem in a data-driven fashion. The predictive results obtained by Support Vector Machines using the GEEK kernel functions were in general statistically similar to those of the standard RBF, Polynomial, and Sigmoid kernel functions, whose parameters were optimized by grid search. Nonetheless, the GEEK kernels handled imbalanced classification problems more appropriately, whereas the results of the standard kernel functions were biased towards the majority class.
Title: GEEK: Grammatical Evolution for Automatically Evolving Kernel Functions
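GEEK's grammar-driven kernel construction can be caricatured by decoding a couple of integer codons into compositions of primitive operations drawn from known kernels. The primitive set, the decoding scheme, and the fixed `gamma`/`degree` parameters below are illustrative assumptions; GEEK evolves the composition (and some parameters) rather than fixing them by hand.

```python
import math

# Primitive operations extracted from known kernels (illustrative subset).
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def sqdist(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

def make_kernel(codons, gamma=0.5, degree=2):
    """Decode codons into a kernel by composing primitive operations.

    Codon 0 picks the base similarity; codon 1 picks the wrapping
    function. Composing these primitives can rebuild classic kernels
    or produce new ones.
    """
    base = [dot, lambda x, y: -gamma * sqdist(x, y)][codons[0] % 2]
    wrap = [lambda v: v, math.exp,
            lambda v: (v + 1) ** degree, math.tanh][codons[1] % 4]
    return lambda x, y: wrap(base(x, y))

rbf_like = make_kernel([1, 1])   # exp(-gamma * ||x - y||^2): RBF
poly_like = make_kernel([0, 2])  # (<x, y> + 1)^degree: polynomial
print(round(rbf_like([0, 0], [1, 1]), 4))  # exp(-1.0) -> 0.3679
print(poly_like([1, 2], [2, 1]))           # (4 + 1)^2 = 25
```

An SVM would consume such an evolved kernel as a precomputed Gram matrix, with the evolutionary fitness measured by cross-validated predictive performance.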