Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.00-90
Jingyu Sun, Susumu Takeuchi, I. Yamasaki
Ontologies are now used to manage data in a rapidly growing number of domains. They typically follow different conventions and thus differ in vocabulary and structure. Ontology alignment, which finds semantic correspondences between elements of these ontologies, can effectively facilitate data exchange and the creation of novel applications in many practical scenarios. However, traditional parametric ontology-mapping methods still depend on individual expertise to set proper mapping parameters, and when artificial neural networks are applied to automatic ontology mapping, the training data are insufficient in most cases. This paper analyzes these problems and proposes a few-shot ontology alignment model that automatically learns how to map two ontologies from only a few training links between their element pairs. The proposed model applies the Siamese neural network, originally from computer vision, to ontology alignment and designs an attention detection network that learns attention weights for different ontology attributes. Experiments on the anatomy ontology alignment task show that our model achieves good performance (an F-measure of 94.3%) with 200 training alignments and no traditional parameter setting.
Title: Few-Shot Ontology Alignment Model with Attribute Attentions
Published in: 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC)
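The attribute-attention scoring described in the abstract above can be sketched as a weighted sum of per-attribute similarities between two elements encoded by a shared (Siamese) encoder. This is an illustrative reconstruction, not the authors' code: the cosine metric, the softmax over learned logits, and the attribute names are all assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def alignment_score(elem_a, elem_b, attention_logits):
    """Score one candidate element pair.

    elem_a / elem_b map attribute name -> embedding vector (the output of a
    shared encoder, as in a Siamese setup); attention_logits maps attribute
    name -> a learned logit. The score is the attention-weighted sum of the
    per-attribute similarities.
    """
    attrs = sorted(elem_a)
    weights = softmax([attention_logits[a] for a in attrs])
    sims = [cosine(elem_a[a], elem_b[a]) for a in attrs]
    return sum(w * s for w, s in zip(weights, sims))
```

With identical elements the score is 1; with orthogonal attribute embeddings it drops to 0, so a threshold on this score yields a match decision.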
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.0-159
H. Matsushita, R. Uda
In this paper, we propose a method for detecting hacked accounts in SNS without predetermined features, since trends in topics and slang expressions constantly change and hackers can craft messages that match predetermined features. Some existing studies detect hacked or impersonated accounts in SNS, but they either rely on predetermined features or use an inappropriate evaluation procedure. In our method, by contrast, a feature named 'category' is automatically extracted from recent tweets by machine learning. We evaluated the categories with 1,000 test accounts. As a result, 74.4% of the test accounts can be detected at a rate of up to 96.0% when they are hacked and only one new message is posted. Moreover, 73.4% of the test accounts can be detected at a rate of up to 99.2% from one newly posted message. Furthermore, other hacked accounts can also be detected at the same rate when several messages are posted in sequence.
Title: Detection of Change of Users in SNS by Two Dimensional CNN
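The core idea above can be illustrated by comparing an account's historical category profile against the profile of newly posted messages. This is a deliberately simplified stand-in for the paper's two-dimensional CNN: the categorizer, the averaging, and the L1-distance threshold are all assumptions for illustration.

```python
def category_profile(messages, categorize):
    """Average per-message category distributions into one account profile.

    categorize: a function mapping one message to a category probability
    vector (in the paper this role is played by a learned model).
    """
    n_cat = len(categorize(messages[0]))
    totals = [0.0] * n_cat
    for m in messages:
        for i, p in enumerate(categorize(m)):
            totals[i] += p
    return [t / len(messages) for t in totals]

def looks_hacked(history, new_messages, categorize, threshold=0.5):
    """Flag a change of user when the L1 distance between the historical
    category profile and the profile of newly posted messages exceeds
    the threshold."""
    old = category_profile(history, categorize)
    new = category_profile(new_messages, categorize)
    return sum(abs(a - b) for a, b in zip(old, new)) > threshold
```

Even a single new message can trigger a detection here, which mirrors the paper's single-message setting, though the real method learns the comparison end to end.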
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.00011
Md. Delwar Hossain, Hiroyuki Inoue, H. Ochiai, Doudou Fall, Y. Kadobayashi
The Controller Area Network (CAN) bus works inside connected cars as the central system for communication between electronic control units (ECUs). Despite its central importance, the CAN does not support an authentication mechanism; that is, CAN messages are broadcast without basic security features. As a result, it is easy for attackers to launch attacks against the CAN bus network. Attackers can compromise the CAN bus in several ways, including denial of service, fuzzing, and spoofing. It is therefore imperative to devise methodologies that protect modern cars against such attacks. In this paper, we propose a Long Short-Term Memory (LSTM)-based Intrusion Detection System (IDS) to detect and mitigate CAN bus network attacks. We first inject attacks into the CAN bus of a car at our disposal to generate an attack dataset, which we use to train and test our model. Our results demonstrate that our classifier detects CAN attacks efficiently, achieving a detection accuracy of 99.9949%.
Title: Long Short-Term Memory-Based Intrusion Detection System for In-Vehicle Controller Area Network Bus
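A preprocessing step for such an IDS can be sketched as turning the stream of CAN arbitration IDs into fixed-length windows of bit vectors, the input shape a sequence model like an LSTM consumes. The 11-bit encoding and the window length are assumptions for illustration, not the paper's published feature set.

```python
def can_id_to_bits(can_id, width=11):
    """Unpack a standard 11-bit CAN arbitration ID into a bit vector,
    most significant bit first."""
    return [(can_id >> (width - 1 - i)) & 1 for i in range(width)]

def make_sequences(can_ids, seq_len=64):
    """Slice a CAN ID stream into overlapping windows of bit vectors:
    each window has shape (seq_len x 11), one timestep per frame."""
    seqs = []
    for start in range(len(can_ids) - seq_len + 1):
        seqs.append([can_id_to_bits(cid) for cid in can_ids[start:start + seq_len]])
    return seqs
```

Each window would then be labeled normal or attack (DoS, fuzzing, spoofing) and fed to the sequence classifier for training.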
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.00039
Yingchao Wu, Bo Dong, Q. Zheng, Rongzhe Wei, Zhiwen Wang, Xuanya Li
Tax evasion usually refers to taxpayers making false declarations to reduce their tax obligations; this behavior causes tax revenue loss and damages the fairness principle of taxation. Tax evasion detection therefore plays a crucial role in reducing that loss. Current auditing methods are mainly traditional data-mining approaches, which cannot adapt well to the increasingly complicated transaction relationships between taxpayers. Driven by this requirement, recent studies have built a transaction network and applied graph pattern matching to identify tax evasion. However, such methods rely on expert experience to extract tax evasion graph patterns, which is time-consuming and labor-intensive. More importantly, taxpayers' basic attributes are not considered, and the dual identity of the taxpayer in the transaction network is not well retained. To address these issues, we propose TED-TNR, a novel tax evasion detection framework via fused transaction network representation, which jointly embeds transaction network topology and basic taxpayer attributes into a low-dimensional vector space while preserving the taxpayer's dual identity in the transaction network. Finally, we conducted experiments on real-world tax data, revealing the superiority of our method compared with state-of-the-art models.
Title: A Novel Tax Evasion Detection Framework via Fused Transaction Network Representation
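The "dual identity" idea above can be illustrated by giving each taxpayer two network views before any embedding step: a seller view (its normalized outgoing transaction row) and a buyer view (its normalized incoming column), concatenated with its basic attributes. This is a hand-built sketch of the fusion input, not the TED-TNR embedding itself; the normalization choice is an assumption.

```python
def fused_features(trans, attrs):
    """Build per-taxpayer fused feature vectors.

    trans[i][j]: transaction amount from taxpayer i to taxpayer j.
    attrs[i]:    basic attribute vector for taxpayer i.
    Returns, per taxpayer, the concatenation of a seller view (outgoing row,
    normalized by total sales), a buyer view (incoming column, normalized by
    total purchases), and the attributes, so both roles are kept.
    """
    n = len(trans)
    feats = []
    for i in range(n):
        out_row = trans[i]
        in_col = [trans[j][i] for j in range(n)]
        out_sum = sum(out_row) or 1.0   # avoid division by zero
        in_sum = sum(in_col) or 1.0
        feats.append([v / out_sum for v in out_row] +
                     [v / in_sum for v in in_col] + list(attrs[i]))
    return feats
```

A downstream embedding or classifier would consume these vectors; in the paper that step is learned jointly with the network topology.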
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.00013
B. K. Sreedhar, Nagarajan Shunmugam
The field of self-driving cars is fast-growing, and numerous companies and organizations are working at the forefront of this technology. A major obstacle for self-driving cars is the expensive hardware needed to run complex models. This project aims to identify a suitable deep learning model under hardware constraints. We obtain the results of a supervised model trained with data from a human driver and compare it to a reinforcement learning-based approach. Both models are trained and tested on devices with low-end hardware, and their results are visualized with the help of a driving simulator. The objective is to demonstrate that even a simple model with enough data augmentation can perform specific tasks without a large investment of time and money. We also aim to make deep learning models portable by deploying the model on a mobile device and showing that it can work as a standalone module.
Title: Deep Learning for Hardware-Constrained Driverless Cars
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.00-61
M. Tits, Benjamin Bernaud, Amel Achour, Maher Badri, L. Guedria
European distribution systems (DS) are increasingly strained by the coupled growth of decentralized production and residential appliance volatility. To cope with this issue, new solutions are emerging, such as local energy storage and energetic community management. The latter aims to maximize collective self-consumption of locally produced energy through optimal planning of flexible appliances, in order to reduce DS maintenance costs and energy loss. The quality of short-term load forecasting is key in this process. However, it depends on various factors, foremost among them the characteristics of the energetic community concerned. In this paper, we propose a methodology and a use case based on randomized sampling for the simulation of virtual energetic communities (VEC). From the numerous simulated VEC, statistical analysis makes it possible to assess the impact of VEC characteristics (such as size, resident type, and availability of historical data) on predictability. Using a 2-year dataset of 52 households recorded in a Belgian city, we quantify the impacts of these characteristics and show that, for this specific case study, a trade-off for efficient forecasting can be reached with a community of about 10-30 households and 2-12 months of history.
Title: Impacts of Size and History Length on Energetic Community Load Forecasting: A Case Study
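The randomized-sampling step above can be illustrated as drawing household subsets to form virtual communities and summing their load curves into the community load to be forecast. The data shapes and function names are illustrative assumptions, not the paper's code.

```python
import random

def sample_vecs(households, size, n_samples, rng=None):
    """Draw virtual energetic communities (VEC) as random household subsets.

    households: list of per-household load curves (lists of readings).
    """
    rng = rng or random.Random(0)
    return [rng.sample(households, size) for _ in range(n_samples)]

def aggregate_load(vec):
    """Sum the member load curves of one VEC into the community load curve,
    the time series a short-term forecaster would be trained on."""
    return [sum(vals) for vals in zip(*vec)]
```

Repeating this for many community sizes and history lengths, then scoring a forecaster on each aggregate curve, yields the kind of predictability statistics the paper analyzes.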
Distributed applications in web services have become increasingly complex in response to various user demands. Consequently, system administrators have difficulty understanding inter-process dependencies in distributed applications. When parts of the system are changed or augmented, they cannot identify the area affected by the change, which might engender a more damaging outage than expected. Therefore, they must trace dependencies among unknown processes automatically. An earlier method discovered dependencies by detecting transport connections with the Linux packet filter on the hosts at both ends of the network connection. However, this adds delay to application traffic because of the extra packet processing in the Linux kernel. Herein, we propose an architecture that monitors network sockets, the endpoints of TCP connections, to trace dependencies. As long as applications use the TCP protocol stack in the Linux kernel, our architecture discovers their dependencies. The monitoring only reads connection information from network sockets and is independent of the application communication, so it does not affect the applications' network delay. Our experiments confirmed that our architecture reduces the delay overhead by 13-20% and the resource load by 43.5% compared to earlier reported methods.
Title: Transtracer: Socket-Based Tracing of Network Dependencies Among Processes in Distributed Applications
Authors: Yuuki Tsubouchi, Masahiro Furukawa, Ryosuke Matsumoto
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.00-92
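Reading TCP connection information from the kernel without touching the packet path can be sketched with Linux's /proc/net/tcp interface, which exposes each socket's local and remote endpoints. The address format here (little-endian hex IPv4 plus a hex port) is the real /proc/net/tcp layout; whether Transtracer itself reads this file or uses another socket-introspection mechanism is not stated in the abstract, so treat this as a sketch in the spirit of the approach.

```python
import socket
import struct

def parse_proc_tcp_line(line):
    """Parse one data line of /proc/net/tcp into (local, remote, state).

    Fields: "sl local_address rem_address st ..."; addresses are
    little-endian hex IPv4 plus a hex port, e.g. "0100007F:1F90"
    is 127.0.0.1:8080. The state is a hex code (0x0A = LISTEN).
    """
    fields = line.split()
    def addr(field):
        hex_ip, hex_port = field.split(':')
        ip = socket.inet_ntoa(struct.pack('<I', int(hex_ip, 16)))
        return ip, int(hex_port, 16)
    return addr(fields[1]), addr(fields[2]), int(fields[3], 16)
```

Periodically parsing these entries on each host and joining local endpoints against remote endpoints seen elsewhere yields the inter-process dependency graph, with no per-packet processing in the data path.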
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.00-11
Fang Liu, E. Eugenio, Ick-Hoon Jin, C. Bowen
Many social networks contain sensitive relational information. One approach to protecting this information while offering flexibility for social network research and analysis is to release synthetic social networks at a pre-specified privacy risk level, given the original observed network. We propose the DP-ERGM procedure, which synthesizes networks that satisfy differential privacy (DP) via exponential random graph models (ERGMs). We apply DP-ERGM to a college student friendship network and compare how well the generated private networks preserve the original network information against two other approaches: differentially private DyadWise Randomized Response (DWRR) and Sanitization of the Conditional probability of Edge given Attribute classes (SCEA). The results suggest that DP-ERGM preserves the original information significantly better than DWRR and SCEA, in both network statistics and inferences from ERGMs and latent space models. In addition, DP-ERGM satisfies node DP, a stronger notion of privacy than the edge DP that DWRR and SCEA satisfy.
Title: Differentially Private Generation of Social Networks via Exponential Random Graph Models
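The DWRR baseline mentioned above can be sketched with classic randomized response on each dyad: flip the edge indicator with probability 1/(1 + e^epsilon), which gives each dyad's release an epsilon-DP guarantee (the ratio of report probabilities is e^epsilon). This is a sketch of the baseline, not of DP-ERGM itself, and the exact DWRR variant in the paper may differ.

```python
import math
import random

def dyadwise_randomized_response(adj, epsilon, rng=None):
    """Release a noisy undirected adjacency matrix: each dyad (i, j), i < j,
    is flipped independently with probability 1 / (1 + e^epsilon)."""
    rng = rng or random.Random(0)
    p_flip = 1.0 / (1.0 + math.exp(epsilon))
    n = len(adj)
    out = [row[:] for row in adj]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_flip:
                out[i][j] = out[j][i] = 1 - adj[i][j]
    return out
```

Small epsilon means a flip probability near 1/2 (strong privacy, heavy noise); large epsilon leaves the network nearly unchanged, which is the utility-privacy trade-off the paper's comparison measures.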
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.00-94
A. Gálvez, A. Iglesias, E. Osaba, J. Ser
This paper presents a new artificial-intelligence-based method to address the following problem: given an initial digital image (the source image) and a modification of it (the mod image) obtained from the source through a color map and visual attributes assumed to be unknown, determine suitable values for the color map and contrast such that, when applied to the mod image, an image similar to the source is obtained. This problem has several applications in image restoration and cleaning. Our approach is based on a powerful swarm intelligence method called the bat algorithm. The method is tested on an illustrative example, the digital image of a famous oil painting. The experimental results show that the method performs very well, with a similarity error rate between the source and the reconstructed images of only 8.37%.
Title: Bat Algorithm Method for Automatic Determination of Color and Contrast of Modified Digital Images
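The bat algorithm's core loop (frequency-tuned velocity updates toward the best solution, plus local random walks gated by a rising pulse-emission rate and a decaying loudness) can be sketched for a generic continuous minimization problem. This is a minimal textbook-style sketch under default parameters of our choosing, not the paper's tuned variant or its image objective.

```python
import math
import random

def bat_algorithm(objective, dim, n_bats=20, iters=200,
                  fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9, rng=None):
    """Minimize `objective` over [-1, 1]^dim with a minimal bat algorithm."""
    rng = rng or random.Random(0)
    xs = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_bats)]
    vs = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats                      # loudness A_i, decays on accept
    best = min(xs, key=objective)[:]
    for t in range(iters):
        pulse = 1.0 - math.exp(-gamma * t)     # pulse emission rate rises
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()        # random frequency
            vs[i] = [v + (x - b) * f for v, x, b in zip(vs[i], xs[i], best)]
            cand = [x + v for x, v in zip(xs[i], vs[i])]
            if rng.random() > pulse:
                # local random walk around the current best solution
                cand = [b + 0.01 * rng.gauss(0, 1) for b in best]
            if objective(cand) <= objective(xs[i]) and rng.random() < loud[i]:
                xs[i] = cand
                loud[i] *= alpha               # quieter after each acceptance
            if objective(xs[i]) < objective(best):
                best = xs[i][:]
    return best
```

In the paper's setting, the objective would be the dissimilarity between the source image and the mod image transformed by the candidate color map and contrast parameters.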
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.000-2
Zheng Li, Yanzhao Xi, Ruilian Zhao
By scheduling algorithms from a low-level algorithm library, a hyper-heuristic can effectively select an appropriate method for hard computational search problems. A hyper-heuristic usually comprises a high-level scheduling layer and a low-level algorithm layer: the high-level strategy layer selects the algorithm for the next scheduling round by evaluating how the different low-level algorithms have performed, while the low-level layer holds a variety of heuristic algorithms, called the algorithm library. A concrete hyper-heuristic framework for multi-objective test case prioritization has been presented in which 18 multi-objective algorithms formed the low-level library. It has gradually been recognized that a hybrid combining single-objective and multi-objective optimization algorithms outperforms either alone. This paper explores the influence of the algorithm library's construction pattern on the hyper-heuristic by constructing fusion patterns of different types of algorithms.
Title: A Hybrid Algorithms Construction of Hyper-Heuristic for Test Case Prioritization
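The two-layer structure described above can be sketched with an epsilon-greedy high-level layer that picks the next low-level heuristic by its average observed improvement. The selection rule is a generic stand-in for whatever high-level strategy the paper's framework uses; the problem interface and reward definition are assumptions for illustration.

```python
import random

def hyper_heuristic(problem, heuristics, rounds=100, eps=0.2, rng=None):
    """Two-layer hyper-heuristic sketch for a minimization problem.

    problem: {'init': start state, 'eval': state -> cost}.
    heuristics: the low-level algorithm library, each a function
    (state, rng) -> new state. The high-level layer picks the next
    heuristic epsilon-greedily by average improvement achieved.
    """
    rng = rng or random.Random(0)
    totals = [0.0] * len(heuristics)
    counts = [0] * len(heuristics)
    state, score = problem['init'], problem['eval'](problem['init'])
    for _ in range(rounds):
        if rng.random() < eps or not any(counts):
            k = rng.randrange(len(heuristics))     # explore the library
        else:
            k = max(range(len(heuristics)),        # exploit the best performer
                    key=lambda i: totals[i] / counts[i] if counts[i] else 0.0)
        new_state = heuristics[k](state, rng)
        new_score = problem['eval'](new_state)
        totals[k] += max(0.0, score - new_score)   # reward = improvement
        counts[k] += 1
        if new_score <= score:                     # keep non-worsening moves
            state, score = new_state, new_score
    return state, score
```

In the paper's setting the library would hold the 18 multi-objective algorithms (and, in the hybrid pattern, single-objective ones), with test-suite orderings as states and a prioritization objective as the evaluation.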