Weighted SMOTE-Ensemble Algorithms: Evidence from Chinese Imbalance Credit Approval Instances
Pub Date: 2019-06-01 | DOI: 10.1109/ICDIS.2019.00038
Mohammad Zoynul Abedin, Guotai Chi, F. Moula
The current study proposes a novel ensemble approach rooted in the weighted synthetic minority over-sampling technique (WSMOTE) algorithm, called WSMOTE-ensemble, for skewed loan performance data modeling. The proposed ensemble classifier hybridizes WSMOTE and bagging with sampling composite mixtures (SCMs) to mitigate the class-imbalance constraints linked to the positive and negative small business instances. It increases the diversity of the executed algorithms, as different sampling composite mixtures are applied to form diverse training sets. Based on the fitted evaluation measures, this study finally recommends the 'WSMOTE-ensemble k-NN' methodology, generated from WSMOTE-decision tree bagging with k-nearest neighbors, as the best fusion sampling strategy, which is a novel finding in this domain.
{"title":"Weighted SMOTE-Ensemble Algorithms: Evidence from Chinese Imbalance Credit Approval Instances","authors":"Mohammad Zoynul Abedin, Guotai Chi, F. Moula","doi":"10.1109/ICDIS.2019.00038","DOIUrl":"https://doi.org/10.1109/ICDIS.2019.00038","url":null,"abstract":"The current study proposes a novel ensemble approach rooted in the weighted synthetic minority over-sampling technique (WSMOTE) algorithm being called WSMOTE-ensemble for skewed loan performance data modeling. The proposed ensemble classifier hybridizes WSMOTE and Bagging with sampling composite mixtures (SCMs) to minimize the class skewed constraints linking to the positive and negative small business instances. It increases the multiplicity of executed algorithms as different sampling composite mixtures are applied to form diverse training sets. Based on the fitted evaluation measures, finally this study recommends that the 'WSMOTE-ensemblek-NN' methodology generating from the WSMOTE-decision tree-bagging with k nearest neighbor is the best fusion sampling strategy which is a novel finding in this domain.","PeriodicalId":181673,"journal":{"name":"2019 2nd International Conference on Data Intelligence and Security (ICDIS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127650063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure and Efficient Query Processing Technique for Encrypted Databases in Cloud
Pub Date: 2019-06-01 | DOI: 10.1109/ICDIS.2019.00026
Sultan Almakdi, B. Panda
Cloud computing is an attractive environment for both organizations and individual users, as it provides scalable computing and storage services at an affordable price. However, privacy and confidentiality are two challenges that trouble most users. Data encryption, using a powerful encryption algorithm such as the Advanced Encryption Standard (AES), is one solution that can allay users' concerns, but other challenges with searching over encrypted data have arisen. Researchers have proposed many different schemes to execute Standard Query Language (SQL) queries over encrypted data by encrypting the data with more than one encryption algorithm, while other researchers have proposed systems based on the fragmentation of encrypted data. In this paper, we propose the bit-vector-based model (BVM), a secure database system that works as an intermediary between users and the cloud provider. In BVM, before the encryption and outsourcing processes, the query manager (QM) takes each record from the main table, parses it, builds a bit vector (BV) for it, and stores it. The BV stores bits, zero and one, and its length equals the total number of sub-columns over all sensitive columns. BVM aims to reduce the range of encrypted records retrieved from the cloud in response to a user's query. In our model, the cloud provider can neither deduce information from the encrypted data nor infer which encryption algorithm was used to encrypt it. We implemented BVM and ran different experiments to compare our model with methods in which data are not encrypted in the cloud. Our evaluation shows that BVM reduces the range of encrypted records retrieved from the cloud to less than 35 percent of the encrypted records. As a result, our model avoids unnecessary decryption processes that add to delay times.
{"title":"Secure and Efficient Query Processing Technique for Encrypted Databases in Cloud","authors":"Sultan Almakdi, B. Panda","doi":"10.1109/ICDIS.2019.00026","DOIUrl":"https://doi.org/10.1109/ICDIS.2019.00026","url":null,"abstract":"Cloud computing is an attractive environment for both organizations and individual users, as it provides scalable computing and storage services at an affordable price. However, privacy and confidentiality are two challenges that trouble most users. Data encryption, using a powerful encryption algorithm such as the Advanced Encryption Standard (AES), is one solution that can allay users' concerns, but other challenges with searching over encrypted data have arisen. Researchers have proposed many different schemes to execute Standard Query Language (SQL) queries over encrypted data by encrypting the data with more than one encryption algorithm. However, other researchers have proposed systems based on the fragmentation of encrypted data. In this paper, we propose bit vector-based model (BVM), a secure database system that works as an intermediary between users and the cloud provider. In BVM, before the encryption and outsourcing processes, the query manager (QM) takes each record from the main table, parses it, builds a bit vector for it, and stores it. The BV stores bits, zero and one, and its length equals the total number of sub-columns for all sensitive columns. BVM aims to reduce the range of retrieved encrypted records that are related to a user's query from the cloud. In our model, the cloud provider cannot deduce information from the encrypted data nor can infer which encryption algorithm was used to encrypt data. We implement BVM and run different experiments to compare our model with the methods in which data are not encrypted in the cloud. Our evaluation shows that BVM reduces the range of the retrieved encrypted records from the cloud to less than 35 percent of encrypted records. As a result, our model avoids unnecessary decryption processes that affect delay times.","PeriodicalId":181673,"journal":{"name":"2019 2nd International Conference on Data Intelligence and Security (ICDIS)","volume":"208 1-2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120929669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applying a Novel Feature Set Fusion Technique to Facial Recognition
Pub Date: 2019-06-01 | DOI: 10.1109/ICDIS.2019.00019
P. Devlin, Matt Halom, I. Ahmad
An important use of facial recognition is the Take Me Home project. In this project, people with disabilities (PWD) are voluntarily registered so that law enforcement officers can identify them and bring them home safely when they are lost. In an application like Take Me Home, optimizing person recognition is of prime importance. While facial recognition models have seen huge performance gains in recent years through improvements to the training process, we show that accuracy can be improved further by combining models trained for different recognition objectives. Specifically, we find that the accuracy of a facial recognition model is higher when its output is fused with the output of a model trained to recognize specific attributes such as hair color, age, lighting, and picture quality. The fusion is performed with a linear regression scheme that can be applied to countless other machine learning tasks. The main contribution of our methodology is the mathematical formulation and a neural network using the Inception Net architecture that enables recognition of a person using up to 40 attributes. In addition, we designed a framework that uses a joint linear regression scheme to combine the facial feature vectors produced by the facial recognition module with the attribute vectors produced by the attribute recognition module. The result is an efficient solution in which a lost person can be identified more accurately by police officers, even under non-ideal conditions.
{"title":"Applying a Novel Feature Set Fusion Technique to Facial Recognition","authors":"P. Devlin, Matt Halom, I. Ahmad","doi":"10.1109/ICDIS.2019.00019","DOIUrl":"https://doi.org/10.1109/ICDIS.2019.00019","url":null,"abstract":"An important use of facial recognition is the Take Me Home project. In this project, people with disabilities (PWD) are voluntarily registered so that law enforcement officers can identify them and bring them home safely when they are lost. In an application like Take me Home, optimization of person recognition is of prime importance. While facial recognition models have seen huge performance gains in recent years through improvements to the training process, we show that accuracy can be improved by combining models trained for different recognition objectives. Specifically, we find that the accuracy of facial recognition model is higher when its output is fused with the output of model trained to recognize specific attributes such as hair color, age, lighting, and picture quality. The fusion is performed with a linear regression that can be applied to countless other machine learning tasks. The main contribution of our methodology is the mathematical formulation and a neural network using the Inception Net architecture that enables the recognition of the person using up to 40 attributes. In addition, we designed a framework that uses a joint linear regression scheme to combine the facial feature vectors produced by the facial recognition module and the attribute vectors produced by the attribute recognition module. The result is an efficient solution in which a lost person is more accurately identified by police officers even under unideal conditions.","PeriodicalId":181673,"journal":{"name":"2019 2nd International Conference on Data Intelligence and Security (ICDIS)","volume":"338 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116260993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Processing Based on Low-Precision IMU Equipment to Predict Wave Height and Wave Period
Pub Date: 2019-06-01 | DOI: 10.1109/ICDIS.2019.00023
Yongle Zhang, Lin Qi, Junyu Dong, Qi Wen, Mingdong Lv
This paper presents an effective method for measuring wave height and period using a low-precision IMU device that integrates a three-axis accelerometer and a three-axis gyroscope. First, the acceleration in the three-axis sensor coordinate system is transformed, via Euler angles, into the acceleration in the vertical direction of the geographic coordinate system. Then the noise in the resulting vertical acceleration signal is removed by anisotropic-diffusion smoothing based on a partial differential equation. Furthermore, frequency-domain integration is adopted to overcome the drift caused by double integration, yielding accurate wave height and period. Finally, experimental comparisons show the practicality of the proposed method.
{"title":"Data Processing Based on Low-Precision IMU Equipment to Predict Wave Height and Wave Period","authors":"Yongle Zhang, Lin Qi, Junyu Dong, Qi Wen, Mingdong Lv","doi":"10.1109/ICDIS.2019.00023","DOIUrl":"https://doi.org/10.1109/ICDIS.2019.00023","url":null,"abstract":"This paper presents an effective method for measuring wave height and period using a low-precision IMU device integrated with a three-axis accelerometer and a three-axis gyroscope. Firstly, the acceleration of the three-axis sensor coordinate system is transformed to the acceleration in the vertical direction under the geographic coordinate system by Euler angle. Then the noise of the resultant acceleration (vertical direction) signal is removed by the smoothing process of the anisotropic diffusion based on the partial differential equation. Furthermore, the method of frequency domain integration is adopted to overcome the deviation caused by the quadratic integral obtaining an accurate wave height and period. Finally, the experimental comparison results show that the practicality of the proposed method.","PeriodicalId":181673,"journal":{"name":"2019 2nd International Conference on Data Intelligence and Security (ICDIS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132140238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Social Networks Generator Based on Modularity: DSNG-M
Pub Date: 2019-06-01 | DOI: 10.1109/ICDIS.2019.00032
Binyao Duan, Wenjian Luo, Hao Jiang, Li Ni
Continuous change is one of the key features of social networks, and the analysis and mining of dynamic social networks are of significant value. However, it is not easy to obtain real-world dynamic social networks, so the artificial generation of dynamic social networks is very valuable. The dynamic social network generators that exist thus far usually generate social networks through specific operations, such as edge/node addition/deletion and community merging/splitting. In this paper, we describe the design of a dynamic social network generator based on modularity, called DSNG-M. DSNG-M takes a static social network and, by flipping edges, generates time-evolving social networks with the expected modularity, where the expected modularity at each time step is calculated based on the community structure of the original static social network. Thus, the generated networks and the original network share a common intrinsic structure, while the connections between nodes vary over the evolutionary process. We conducted experiments to analyze changes in the network characteristics of the generated social networks, such as the number of edges, node degrees, and average distances between nodes. Experiments were also conducted to verify that the aggregation of multi-temporal social networks reflects the community structure of the original social network and to analyze the effect of the generator's parameter on the time cost.
{"title":"Dynamic Social Networks Generator Based on Modularity: DSNG-M","authors":"Binyao Duan, Wenjian Luo, Hao Jiang, Li Ni","doi":"10.1109/ICDIS.2019.00032","DOIUrl":"https://doi.org/10.1109/ICDIS.2019.00032","url":null,"abstract":"Continuous change is one of the key features of social networks, and the analysis and mining of dynamic social networks are of significant value. However, it is not easy to obtain real-world dynamic social networks. Thus, the artificial generation of dynamic social networks is very valuable. The dynamic social network generators that exist thus far usually generate social networks with specific operations, such as edge/node add/delete and community merge/split. In this paper, we describe the design of a dynamic social network generator based on modularity, called DSNG-M. DSNG-M initially takes a static social network and by flipping edges generates time-evolving social networks with the expected modularity, where the expected modularity at each time step is calculated based on the community structure of the original static social network. Thus, the generated networks and the original network have a common intrinsic structure, while the connections between nodes vary in the evolutionary process. We conducted experiments to analyze the change in the network characteristics of the generated social networks, such as the number of edges, degrees of nodes, and average distances between nodes. Experiments were also conducted to verify that the aggregation of multi-temporal social networks can reflect the community structure of the original social network and to analyze the effects of the generator's parameter on the time cost.","PeriodicalId":181673,"journal":{"name":"2019 2nd International Conference on Data Intelligence and Security (ICDIS)","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122257353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Evolutionary Approach to Optimize Data Center Profit in Smart Grid Environment
Pub Date: 2019-06-01 | DOI: 10.1109/ICDIS.2019.00021
S. Khalid, Ishfaq Ahmad, E. KhodyarMohammad
Overwhelming energy-related costs mar data center profits. In a smart grid, the price of electricity may change with real-time demand, geographic area, and time of use. Data centers with flexible request-dispatch and resource-allocation capabilities can cooperatively exploit these price variations to reduce expenditures and maximize profit. In this paper, we model data center profit maximization as a constrained multi-objective optimization problem. Our proposed scheme optimizes data center revenue and expense objectives simultaneously and, to the best of our knowledge, is the first scheme that provides trade-off solutions for use in varied operational scenarios. The approach uses the Strength Pareto Evolutionary Algorithm (SPEA-II) as the base framework and adapts it to devise an algorithm that finds Pareto-optimal solutions to the data center profit maximization problem in a smart grid environment. The simulation results demonstrate the efficacy of the proposed technique.
{"title":"An Evolutionary Approach to Optimize Data Center Profit in Smart Grid Environment","authors":"S. Khalid, Ishfaq Ahmad, E. KhodyarMohammad","doi":"10.1109/ICDIS.2019.00021","DOIUrl":"https://doi.org/10.1109/ICDIS.2019.00021","url":null,"abstract":"Overwhelming energy-related costs mar data center profits. In a smart grid, the price of electricity may change with real-time demand, geographic area, and time-of-use. Data centers with flexible request dispatch and resource allocation capabilities can cooperatively avail these price variations to reduce expenditures and maximize profit. In this paper, we model the data center profit maximization as a constrained multi-objective optimization problem. Our proposed scheme optimizes data center revenue and expense objectives simultaneously and to the best of our knowledge, is the first scheme that provides trade-off solutions for use in varied operational scenarios. The approach utilizes the Strength Pareto Evolutionary Algorithm (SPEA-II) as the base framework and adapts it to devise an algorithm. Our technique finds Pareto optimal solutions for data center profit maximization problem in a smart grid environment. The simulation results prove the efficacy of the proposed technique.","PeriodicalId":181673,"journal":{"name":"2019 2nd International Conference on Data Intelligence and Security (ICDIS)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116405211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust PDF Malware Detection with Image Visualization and Processing Techniques
Pub Date: 2019-06-01 | DOI: 10.1109/ICDIS.2019.00024
Andrew Corum, Donovan Jenkins, Jun Zheng
PDF, as one of the most popular document file formats, has frequently been utilized by attackers as a vector to convey malware, due to its flexible file structure and its ability to embed different kinds of content. In this paper, we propose a new learning-based method to detect PDF malware using image visualization and processing techniques. The PDF files are first converted to grayscale images using image visualization techniques. Then various image features representing the distinct visual characteristics of PDF malware and benign PDF files are extracted. Finally, learning algorithms are applied to create classification models that classify a new PDF file as malicious or benign. The performance of the proposed method was evaluated using the Contagio PDF malware dataset. The results show that the proposed method is a viable solution for PDF malware detection. It is also shown that the proposed method is more robust against reverse mimicry attacks than the state-of-the-art learning-based method.
{"title":"Robust PDF Malware Detection with Image Visualization and Processing Techniques","authors":"Andrew Corum, Donovan Jenkins, Jun Zheng","doi":"10.1109/ICDIS.2019.00024","DOIUrl":"https://doi.org/10.1109/ICDIS.2019.00024","url":null,"abstract":"PDF, as one of most popular document file format, has been frequently utilized as a vector by attackers to covey malware due to its flexible file structure and the ability to embed different kinds of content. In this paper, we propose a new learning-based method to detect PDF malware using image processing and processing techniques. The PDF files are first converted to grayscale images using image visualization techniques. Then various image features representing the distinct visual characteristics of PDF malware and benign PDF files are extracted. Finally, learning algorithms are applied to create the classification models to classify a new PDF file as malicious or benign. The performance of the proposed method was evaluated using Contagio PDF malware dataset. The results show that the proposed method is a viable solution for PDF malware detection. It is also shown that the proposed method is more robust to resist reverse mimicry attacks than the state-of-art learning-based method.","PeriodicalId":181673,"journal":{"name":"2019 2nd International Conference on Data Intelligence and Security (ICDIS)","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117349095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incentivizing Services Sharing in IoT with OSGi and HashGraph
Pub Date: 2019-06-01 | DOI: 10.1109/ICDIS.2019.00015
U. Timalsina, A. Wang
Service-Oriented Architecture is a viable option for developing applications in an Internet of Things (IoT) environment. One important consideration in developing services for an IoT environment is how to incentivize service providers and consumers so that a healthy IoT marketplace, with a balanced supply of and demand for services, can come into practice. We argue that service providers should be specifically incentivized in some form to offer quality services in an IoT environment. In this paper, we present an IoT ecosystem in which each exchange of a service between a service provider and a service consumer is logged as a transaction in a distributed ledger. For service sharing, we used the OSGi Remote Services implementation of the Eclipse Communication Framework. For the distributed ledger, we used Swirlds Hashgraph. Each OSGi remote service is requested by digitally signing a commitment to use the service, and upon service exchange, the signature is logged as a Hashgraph transaction. A proof-of-concept prototype has been implemented with positive results.
{"title":"Incentivizing Services Sharing in IoT with OSGi and HashGraph","authors":"U. Timalsina, A. Wang","doi":"10.1109/ICDIS.2019.00015","DOIUrl":"https://doi.org/10.1109/ICDIS.2019.00015","url":null,"abstract":"Service Oriented Architecture is a viable option for developing applications in an Internet of Things (IoT) environment. One important consideration in developing services for an IoT environment is how to incentivize service providers and consumers so that a healthy IoT marketplace can come into practice with a balanced supply and demand for services. We argue that service providers should be specifically incentivized in some form to offer quality services in an IoT environment. In this paper, we present an IoT ecosystem, where each exchange of a service between a service provider and service consumer is logged as a transaction in a distributed ledger. For service sharing, we used OSGi Remote Services implementation of the Eclipse Communication Framework. For the distributed ledger, we used Swirlds Hashgraph. Each OSGi remote service is requested by digitally signing a commitment to use the service and upon service exchange, the signature is logged as a Hashgraph transaction. A proof-of-concept prototype has been implemented with positive results.","PeriodicalId":181673,"journal":{"name":"2019 2nd International Conference on Data Intelligence and Security (ICDIS)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132699754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and Evaluation of an Approach for Feedback-Based Adaptation of Incident Prioritization
Pub Date: 2019-06-01 | DOI: 10.1109/ICDIS.2019.00012
Leonard Renners, Felix Heine, Carsten Kleiner, G. Rodosek
Network security tools such as Security Information and Event Management systems detect and process incidents with respect to the network and environment in which they occur. Part of the analysis is used to estimate a priority for each incident so that the limited workforce can be assigned effectively to the most important events. This process is referred to as incident prioritization, and it is typically based on a set of static rules and calculations. Due to shifting concepts, new network entities, different attacks, or changing guidelines, the rules may contain errors, which leads to incorrectly prioritized incidents. An explicit process even to identify those problems is often missing, let alone assistance in adjusting the model. In this paper, we present an approach to adapt an incident prioritization model to correct errors in the rating process. We developed concepts to collect feedback from an analyst and to automatically generate and evaluate improvements to the prioritization model. The evaluation of our approach on real and synthetic data, in a comparative experiment against additional standard learning algorithms, shows promising results.
{"title":"Design and Evaluation of an Approach for Feedback-Based Adaptation of Incident Prioritization","authors":"Leonard Renners, Felix Heine, Carsten Kleiner, G. Rodosek","doi":"10.1109/ICDIS.2019.00012","DOIUrl":"https://doi.org/10.1109/ICDIS.2019.00012","url":null,"abstract":"Network security tools like Security Information and Event Management systems detect and process incidents with respect to the network and environment they occur in. Part of the analysis is used to estimate a priority for the incident to effectively assign the limited workforce on the most important events. This process is referred to as incident prioritization and it is typically based on a set of static rules and calculations. Due to shifting concepts, new network entities, different attacks or changing guidelines, the rules may contain errors, which leads to incorrectly prioritized incidents. An explicit process to even identify those problems is often amiss, let alone assistance to adjust the model. In this paper, we present an approach to adapt an incident prioritization model to correct errors in the rating process. We developed concepts to collect feedback from an analyst and automatically generate and evaluate improvements to the prioritization model. The evaluation of our approach on real and synthetic data in a comparative experiment using further, regular learning algorithms shows promising results.","PeriodicalId":181673,"journal":{"name":"2019 2nd International Conference on Data Intelligence and Security (ICDIS)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115548857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure Exchanging of Various Data Types Used for Classification Purposes
Pub Date: 2019-06-01 | DOI: 10.1109/ICDIS.2019.00022
M. Alkalai, Wisam H. Benamer
Frequently, in epidemiological studies, it is essential to study a disease of concern by observing the records of many patients. These records are usually the property of local clinics, medical centers, or hospitals providing services within the affected areas. The records are often gathered into datasets, each of which encompasses detailed information about the causative agent of the epidemic disease in a specific zone. Therefore, exchanging such datasets in a way that preserves the privacy and integrity of their contents is essential, since studying these datasets gives a better understanding of the nature of the diseases and can eventually help compose a cure. In this paper, we compare four well-known secret-key cryptographic techniques to choose the best cipher, the one that passes different evaluations with the highest marks. The selected cipher would then be used to provide secure exchange of such datasets. Experiments on the Wisconsin dataset, using Java implementations of the four ciphers, show contrasts between the performances of these ciphers that draw a clear picture of the most suitable cipher to use.
{"title":"Secure Exchanging of Various Data Types Used for Classification Purposes","authors":"M. Alkalai, Wisam H. Benamer","doi":"10.1109/ICDIS.2019.00022","DOIUrl":"https://doi.org/10.1109/ICDIS.2019.00022","url":null,"abstract":"Frequently, in epidemiological studies, it is essential to study a disease of concern through observing the records of many patients. These records are usually the property of some local clinics, medical centers or hospitals providing services within the affected areas. The records are often gathered into datasets and each encompasses detailed information about the causative agent of the epidemic diseases in a specific zone. Therefore, trading such datasets, in a way that preserve the privacy and integrity of their contents, is essential. Since, studying these datasets gives a better understanding of the nature of the diseases and eventually compose a cure. In this paper, we compare four well-known secret-key cryptographic techniques to choose the best cipher that passes different evaluations with highest marks. The selected superior cipher would then be involved in providing secure exchanging of such datasets. The experiments on Wisconsin dataset, using java implementations of the four ciphers, show that there are contrasts between the performances of these ciphers which draw a clear picture of the most suitable cipher to use.","PeriodicalId":181673,"journal":{"name":"2019 2nd International Conference on Data Intelligence and Security (ICDIS)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115064882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}