Identifying photovoltaic (PV) parameters accurately and reliably is conducive to the effective use of solar energy. The recently proposed grey wolf optimizer (GWO) is a nature-inspired method that has become a popular way to solve PV parameter identification. However, determining PV parameters is typically regarded as a multimodal optimization problem, which is challenging; thus, the original GWO still suffers from insufficient accuracy and reliability when identifying PV parameters. In this study, an enhanced grey wolf optimizer with fusion strategies (EGWOFS) is proposed to overcome these shortcomings. First, a modified multiple learning backtracking search algorithm (MMLBSA) is designed to improve the global exploration potential of the original GWO. Second, a dynamic spiral updating position strategy (DSUPS) is constructed to promote the performance of local exploitation. Finally, the proposed EGWOFS is verified on two groups of test data, which include three types of PV test models and experimental data extracted from a manufacturer's data sheet. Experiments show that the overall performance of the proposed EGWOFS achieves competitive or better results in terms of accuracy and reliability for most test models.
Jinkun Luo, Fazhi He, Xiaoxin Gao. "An enhanced grey wolf optimizer with fusion strategies for identifying the parameters of photovoltaic models." Integrated Computer-Aided Engineering, pp. 89-104, 2022-10-14. DOI: 10.3233/ica-220693.
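The baseline that EGWOFS builds on can be sketched in a few lines. The following is a generic textbook GWO on a toy objective, not the authors' EGWOFS or their PV model; the function names, swarm size, and iteration count are illustrative.

```python
import numpy as np

def gwo(f, dim, bounds, n_wolves=30, n_iter=200, seed=0):
    """Minimal grey wolf optimizer: each wolf moves toward the average
    of positions suggested by the three best wolves (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(n_iter):
        fit = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fit)[:3]]
        a = 2.0 - 2.0 * t / n_iter  # exploration factor, decays 2 -> 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(new / 3.0, lo, hi)
    fit = np.apply_along_axis(f, 1, X)
    best = X[np.argmin(fit)]
    return best, float(f(best))
```

On a smooth toy objective such as the 2-D sphere function, this converges close to the optimum; the paper's point is that on multimodal PV objectives the plain update above is not enough, motivating the MMLBSA and DSUPS additions.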
Hao Pu, Xinjie Wan, Taoran Song, P. Schonfeld, Wei Li, Jianping Hu
Railway alignment design is a complicated problem affected by intricate environmental factors. Although numerous alignment optimization methods have been proposed, a general limitation among them is the lack of a spatial environmental suitability analysis to guide the subsequent alignment search. Consequently, many unfavorable regions in the study area are still searched, which significantly degrades optimization efficiency. To solve this problem, a geographic information model is proposed for evaluating the environmental suitability of railways. Initially, the study area is abstracted as a spatial voxel set and the 3-D reachable ranges of railways are determined. Then, a geographic information model is devised which considers topographic influencing factors (including those affecting structural cost and stability) as well as geologic influencing factors (including landslides and seismic impacts) for different railway structures. Afterward, a 3-D environmental suitability map can be generated using a multi-criteria decision-making approach to combine the considered factors. The map is further integrated into the alignment optimization process based on a 3-D distance transform algorithm. The proposed model and method are applied to two complex realistic railway cases. The results demonstrate that they can considerably improve the search efficiency and also find better alignments compared to the best alternatives obtained manually by experienced human designers and produced by a previous distance transform algorithm as well as a genetic algorithm.
"A geographic information model for 3-D environmental suitability analysis in railway alignment optimization." Integrated Computer-Aided Engineering, pp. 67-88, 2022-09-15. DOI: 10.3233/ica-220692.
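The multi-criteria step that combines topographic and geologic factors into one suitability map can be sketched as a weighted overlay of normalized factor rasters. This is a minimal 2-D sketch under assumed equal weights, not the paper's voxel model or its actual factor set.

```python
import numpy as np

def suitability_map(factors, weights):
    """Weighted-sum overlay of factor rasters, each min-max normalized
    to [0, 1]; higher values mean more suitable terrain."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    norm = []
    for f in factors:
        f = np.asarray(f, dtype=float)
        rng = f.max() - f.min()
        norm.append((f - f.min()) / rng if rng else np.zeros_like(f))
    return np.tensordot(w, np.stack(norm), axes=1)
```

A downstream alignment search would then restrict its 3-D distance transform to cells whose suitability exceeds a threshold, which is how unfavorable regions get pruned before the search begins.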
Shareeful Islam, Abdulrazaq Abba, Umar Mukhtar Ismail, H. Mouratidis, Spyridon Papastergiou
Healthcare organisations constantly face sophisticated cyberattacks due to the sensitivity and criticality of patient healthcare information and the wide connectivity of medical devices. Such attacks can disrupt critical service delivery. A number of existing works focus on using Machine Learning (ML) models to predict vulnerability and exploitation, but most rely on parameterized values to predict severity and exploitability. This paper proposes a novel method that uses ontology axioms to define essential concepts related to the overall healthcare ecosystem and to ensure semantic consistency checking among such concepts. The application of ontology enables the formal specification and description of the healthcare ecosystem and the key elements used in vulnerability assessment as a set of concepts. Such specification also strengthens the relationships between healthcare-based and vulnerability assessment concepts, in addition to supporting semantic definition of and reasoning over the concepts. Our work also makes use of ML techniques to predict possible security vulnerabilities in healthcare supply chain services. The paper demonstrates the applicability of our work by using vulnerability datasets to predict exploitation. The results show that conceptualizing healthcare sector cybersecurity with an ontological approach provides mechanisms to better understand the correlation between the healthcare sector and the security domain, while the ML algorithms increase the accuracy of vulnerability exploitability prediction. Our results show that Linear Regression, Decision Tree and Random Forest provide reasonable results for predicting vulnerability exploitability.
"Vulnerability prediction for secure healthcare supply chain service delivery." Integrated Computer-Aided Engineering, pp. 389-409, 2022-08-19. DOI: 10.3233/ica-220689.
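Of the three regressors compared, linear regression is the simplest to sketch. Below is an ordinary-least-squares fit on synthetic CVSS-like features; the feature names and data are entirely made up for illustration and are not the paper's dataset or pipeline.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with a bias term: the linear-regression
    baseline for mapping vulnerability features to an exploitability score."""
    Xb = np.c_[np.ones(len(X)), X]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_ols(w, X):
    return np.c_[np.ones(len(X)), X] @ w
```

Tree-based models (Decision Tree, Random Forest) would replace `fit_ols` with a library estimator; the point of the sketch is only the feature-vector-to-score framing.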
Georgios D. Karatzinis, P. Michailidis, Iakovos T. Michailidis, Athanasios Ch. Kapoutsis, E. Kosmatopoulos, Y. Boutalis
To sufficiently protect personnel and the physical environment from hazardous leaks, recent industrial practices integrate innovative multi-modalities to maximize response efficiency. Since early detection of such incidents is the most critical factor for providing efficient response measures, the continuous and reliable surveying of industrial spaces is of primary importance. The current study develops a surveying mechanism utilizing a swarm of heterogeneous aerial mobile sensing platforms for the continuous monitoring and detection of dispersed CH4 gas plumes. To represent the progression of the CH4 diffusion incident in a timely manner, the research concerns a simulated indoor, geometrically complex environment, where early detection and timely response are critical. The primary aim was to evaluate the efficiency of a novel multi-agent, closed-loop algorithm responsible for the UAV path planning of the swarm, in comparison with a state-of-the-art path-planning methodology, Efficient Global Optimization (EGO), acting as a benchmark. The novel algorithm, Block Coordinate Descent Cognitive Adaptive Optimization (BCD-CAO), outperformed EGO in seven simulation scenarios, demonstrating improved dynamic adaptation of the aerial UAV swarm to its heterogeneous operational capabilities. The evaluation results presented herein exhibit the efficiency of the proposed algorithm in continuously conforming the mobile sensing platforms' formation toward maximizing the total measured density of the diffused volume plume.
"Coordinating heterogeneous mobile sensing platforms for effectively monitoring a dispersed gas plume." Integrated Computer-Aided Engineering, pp. 411-429, 2022-08-19. DOI: 10.3233/ica-220690.
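The block-coordinate idea behind BCD-CAO can be illustrated on a grid: each agent in turn greedily improves the swarm objective while the others are held fixed. This toy version uses a known density grid and single-cell moves, whereas the real algorithm works with noisy measurements and cognitive adaptive optimization; everything here is illustrative.

```python
import numpy as np
from itertools import product

def total(density, positions):
    """Swarm objective: summed plume density over occupied cells."""
    return sum(density[r, c] for r, c in set(positions))

def bcd_step(density, positions):
    """One block-coordinate pass: each agent in turn moves to the
    neighboring cell that maximizes the swarm total, others fixed."""
    H, W = density.shape
    for i, (r, c) in enumerate(positions):
        best, best_val = (r, c), total(density, positions)
        for dr, dc in product((-1, 0, 1), repeat=2):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W and (nr, nc) not in positions:
                cand = positions.copy()
                cand[i] = (nr, nc)
                v = total(density, cand)
                if v > best_val:
                    best, best_val = (nr, nc), v
        positions[i] = best
    return positions
```

Repeating `bcd_step` until no agent moves yields a coordinate-wise local optimum of the coverage objective, which is the convergence notion block coordinate descent provides.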
Safe navigation at sea is more important than ever. Cargo is usually transported by vessel because it makes economic sense. However, marine accidents can cause huge losses of people, cargo, and the vessel itself, as well as irreversible ecological disasters. These are the reasons to strive for safe vessel navigation. The navigator must ensure safe navigation, planning every maneuver and acting safely while evaluating and predicting the actions of other vessels in dense maritime traffic. This is a complicated process that requires constant human concentration, and it is a tiring, long-lasting duty. Human error is therefore the main cause of collisions between vessels. In this paper, different reinforcement learning strategies are explored to find the most appropriate one for the real-life problem of ensuring safe maneuvering in maritime traffic. An experiment using different algorithms was conducted to discover a suitable method for autonomous vessel navigation. The experiments indicate that the most effective algorithm (Deep SARSA) reaches 92.08% accuracy. The efficiency of the proposed model is demonstrated through a real-life collision between two vessels and how it could have been avoided.
Andrius Daranda, G. Dzemyda. "Reinforcement learning strategies for vessel navigation." Integrated Computer-Aided Engineering, pp. 53-66, 2022-08-15. DOI: 10.3233/ica-220688.
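The on-policy update at the core of SARSA (which Deep SARSA approximates with a neural network) is easy to show in tabular form. This is a generic textbook sketch on a toy one-dimensional "channel" environment, not the paper's vessel simulator; states, rewards, and hyperparameters are all invented for illustration.

```python
import random

def corridor_step(state, action, n_states=5):
    """Toy 1-D channel: action 0 moves left, 1 moves right; reward 1
    on reaching the rightmost state, which ends the episode."""
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    done = nxt == n_states - 1
    return nxt, (1.0 if done else 0.0), done

def sarsa(episodes=300, n_states=5, n_actions=2,
          alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def policy(s):  # epsilon-greedy action selection
        if rng.random() < eps:
            return rng.randrange(n_actions)
        return max(range(n_actions), key=lambda a: Q[s][a])

    for _ in range(episodes):
        s, a = 0, policy(0)
        done = False
        while not done:
            s2, r, done = corridor_step(s, a, n_states)
            a2 = policy(s2)
            # SARSA: on-policy TD update toward r + gamma * Q(s', a')
            Q[s][a] += alpha * (r + gamma * (0.0 if done else Q[s2][a2]) - Q[s][a])
            s, a = s2, a2
    return Q
```

Unlike Q-learning, the target uses the action `a2` the policy actually takes next, so the learned values reflect the exploring policy itself; that on-policy property is what "SARSA" (state, action, reward, state, action) names.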
Danial Katoozian, Hossein Hosseini-Nejad, M. Dehaqani, A. Shoeibi, J. Górriz
Motor intention decoding is one of the most challenging issues in brain-machine interfaces (BMI). Despite several important studies on accurate algorithms, the decoding stage is still processed on a computer, which makes the solution impractical for implantable applications due to its size and power consumption. This study aimed to provide an appropriate real-time decoding approach for implantable BMIs by proposing an agile decoding algorithm with a new input model and implementing efficient hardware. Unlike common methods that employ the firing rate as input, this method uses a new input space based on spike train temporal information. The proposed approach was evaluated on a real dataset recorded from the frontal eye field (FEF) of two male rhesus monkeys, with eight possible angles as the output space, and achieved a decoding accuracy of 62%. Furthermore, a hardware architecture was designed as an application-specific integrated circuit (ASIC) chip for real-time neural decoding based on the proposed algorithm. The designed chip was implemented in standard complementary metal-oxide-semiconductor (CMOS) 180 nm technology, occupies an area of 4.15 mm², and consumes 28.58 μW from a 1.8 V supply.
"A hardware efficient intra-cortical neural decoding approach based on spike train temporal information." Integrated Computer-Aided Engineering, pp. 431-445, 2022-07-07. DOI: 10.3233/ica-220687.
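The decoding task itself, mapping per-trial spike-train features to one of eight angles, can be illustrated with a nearest-centroid classifier on synthetic features (here spike count and mean inter-spike interval). This is not the paper's algorithm or data; the feature choice and class separation are invented for the sketch.

```python
import numpy as np

def fit_centroids(features, labels):
    """Per-class mean feature vector: the simplest trainable decoder."""
    classes = np.unique(labels)
    cents = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, cents

def decode(classes, cents, features):
    """Assign each trial to the class with the nearest centroid."""
    d = np.linalg.norm(features[:, None, :] - cents[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]
```

A hardware-friendly decoder favors exactly this kind of structure: distance comparisons against a small set of stored templates, with no floating-point-heavy training loop on the implant.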
Automated machine learning (AutoML) supports ML engineers and data scientists by automating individual tasks such as model selection and hyperparameter optimization, or by automatically generating entire ML pipelines. This article presents a survey of 20 state-of-the-art AutoML solutions, both open source and commercial. There is a wide range of functionalities, targeted user groups, support for ML libraries, and degrees of maturity. Depending on the AutoML solution, a user may be locked into one specific ML library technology or one product ecosystem. Additionally, the user might require some expertise in data science and programming to use the AutoML solution. We propose a concept called OMA-ML (Ontology-based Meta AutoML) that combines the features of existing AutoML solutions by integrating them (Meta AutoML). OMA-ML can incorporate any AutoML solution, allowing various user groups to generate ML pipelines with the ML library of their choice. An ontology is the information backbone of OMA-ML. OMA-ML is being implemented as an open source solution, with seven third-party AutoML solutions currently integrated.
Alexander Zender, B. Humm. "Ontology-based Meta AutoML." Integrated Computer-Aided Engineering, pp. 351-366, 2022-06-24. DOI: 10.3233/ica-220684.
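The Meta-AutoML idea, one front end dispatching to several AutoML backends and keeping the best result, can be sketched as an adapter registry. The adapter names, return shape, and scoring here are hypothetical; OMA-ML's actual adapters, ontology layer, and APIs are not reproduced.

```python
from typing import Callable, Dict, Tuple

# Registry of adapters: each wraps one AutoML backend and returns
# (pipeline_description, validation_score). Names are illustrative.
ADAPTERS: Dict[str, Callable[[object], Tuple[str, float]]] = {}

def register(name: str):
    def deco(fn):
        ADAPTERS[name] = fn
        return fn
    return deco

def run_meta_automl(dataset, backends):
    """Dispatch the dataset to each requested backend and keep the
    pipeline with the best validation score."""
    results = {b: ADAPTERS[b](dataset) for b in backends}
    best = max(results, key=lambda b: results[b][1])
    return best, results[best]
```

The adapter boundary is what avoids lock-in: each backend stays behind a uniform call signature, so users pick the ML library via the backend list rather than by committing to one product ecosystem.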
Konstantinos Demertzis, L. Iliadis, Panayotis Kikiras, E. Pimenidis
Training a model using batch learning requires uniform data storage in a repository. This approach is intrusive, as users have to expose their privacy and exchange sensitive data by sending them to central entities for preprocessing. Unlike this centralized approach, training intelligent models via the federated learning (FEDL) mechanism can be carried out using decentralized data. This process ensures that privacy and protection of sensitive information can be managed by a user or an organization, employing a single universal model for all users. This model applies average aggregation methods to the set of cooperative training data, which raises serious concerns about the effectiveness of the universal approach and, therefore, about the validity of FEDL architectures in general: it flattens the unique needs of individual users without considering the local events to be managed. This paper proposes an innovative hybrid explainable semi-personalized federated learning model that utilizes Shapley Values and Lipschitz Constant techniques to create personalized intelligent models. It is based on the needs and events that each individual user must address in a federated format. Explanations are the set of characteristics of the interpretable system that, for a given instance, contributed to a conclusion and describe the model's behavior at both local and global levels. Retraining is suggested only for those features whose degree of change is considered important enough for the evolution of the model's functionality.
"An explainable semi-personalized federated learning model." Integrated Computer-Aided Engineering, pp. 335-350, 2022-06-17. DOI: 10.3233/ica-220683.
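The "average aggregation" step the paper critiques is essentially federated averaging (FedAvg): the server replaces the global parameters with a sample-count-weighted mean of the client parameters. A minimal sketch, with plain Python lists standing in for real model weights:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: aggregate client parameter vectors,
    weighting each client by its local sample count."""
    total = float(sum(client_sizes))
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i, wi in enumerate(w):
            agg[i] += wi * n / total
    return agg
```

Because every client receives the same `agg`, clients with atypical local data are pulled toward the majority, which is exactly the flattening of individual needs that motivates the semi-personalized design.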
Manuel Carranza-García, F. J. Galán-Sales, José María Luna-Romera, José Cristóbal Riquelme Santos
Autonomous vehicles are equipped with complementary sensors to perceive the environment accurately. Deep learning models have proven to be the most effective approach for computer vision problems. Therefore, in autonomous driving, it is essential to design reliable networks to fuse data from different sensors. In this work, we develop a novel data fusion architecture using camera and LiDAR data for object detection in autonomous driving. Given the sparsity of LiDAR data, developing multi-modal fusion models is a challenging task. Our proposal integrates an efficient LiDAR sparse-to-dense completion network into the pipeline of object detection models, achieving more robust performance at different times of the day. The Waymo Open Dataset, the most diverse detection benchmark in terms of weather and lighting conditions, was used for the experimental study. The depth completion network is trained with the KITTI depth dataset, and transfer learning is used to obtain dense maps on Waymo. With the enhanced LiDAR data and the camera images, we explore early and middle fusion approaches using popular object detection models. The proposed data fusion network provides a significant improvement over single-modal detection at all times of the day, and outperforms previous approaches that upsample depth maps with classical image processing algorithms. Our multi-modal and multi-source approach achieves mean AP increases of 1.5, 7.5, and 2.1 at day, night, and dawn/dusk, respectively, using four different object detection meta-architectures.
{"title":"Object detection using depth completion and camera-LiDAR fusion for autonomous driving","authors":"Manuel Carranza-García, F. J. Galán-Sales, José María Luna-Romera, José Cristóbal Riquelme Santos","doi":"10.3233/ica-220681","DOIUrl":"https://doi.org/10.3233/ica-220681","url":null,"abstract":"Autonomous vehicles are equipped with complimentary sensors to perceive the environment accurately. Deep learning models have proven to be the most effective approach for computer vision problems. Therefore, in autonomous driving, it is essential to design reliable networks to fuse data from different sensors. In this work, we develop a novel data fusion architecture using camera and LiDAR data for object detection in autonomous driving. Given the sparsity of LiDAR data, developing multi-modal fusion models is a challenging task. Our proposal integrates an efficient LiDAR sparse-to-dense completion network into the pipeline of object detection models, achieving a more robust performance at different times of the day. The Waymo Open Dataset has been used for the experimental study, which is the most diverse detection benchmark in terms of weather and lighting conditions. The depth completion network is trained with the KITTI depth dataset, and transfer learning is used to obtain dense maps on Waymo. With the enhanced LiDAR data and the camera images, we explore early and middle fusion approaches using popular object detection models. The proposed data fusion network provides a significant improvement compared to single-modal detection at all times of the day, and outperforms previous approaches that upsample depth maps with classical image processing algorithms. 
Our multi-modal and multi-source approach achieves a 1.5, 7.5, and 2.1 mean AP increase at day, night, and dawn/dusk, respectively, using four different object detection meta-architectures.","PeriodicalId":50358,"journal":{"name":"Integrated Computer-Aided Engineering","volume":"133 1","pages":"241-258"},"PeriodicalIF":6.5,"publicationDate":"2022-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79374833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
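The early-fusion approach the abstract mentions can be illustrated with a minimal sketch: a dense depth map (as produced by a sparse-to-dense completion network) is attached to the camera image as a fourth input channel before the detector runs. The function name and normalization scheme here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def early_fusion(rgb: np.ndarray, dense_depth: np.ndarray) -> np.ndarray:
    """Stack a dense depth map onto an RGB image as an extra channel (RGB-D early fusion).

    rgb:         (H, W, 3) uint8 camera image
    dense_depth: (H, W) float depth map, e.g. the output of a depth completion network
    """
    if rgb.shape[:2] != dense_depth.shape:
        raise ValueError("image and depth map must share spatial dimensions")
    # Normalize both modalities to [0, 1] so the detector's first conv layer
    # sees comparable input ranges.
    rgb_n = rgb.astype(np.float32) / 255.0
    d = dense_depth.astype(np.float32)
    d_n = (d - d.min()) / (d.max() - d.min() + 1e-8)
    # Concatenate along the channel axis -> (H, W, 4) RGB-D tensor.
    return np.concatenate([rgb_n, d_n[..., None]], axis=-1)
```

A detector consuming this tensor needs its first convolution widened to 4 input channels; middle fusion, by contrast, would keep separate RGB and depth branches and merge their feature maps deeper in the network.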
Multi-center clustering algorithms have attracted the attention of researchers because they can deal with complex data sets more effectively. However, reasonably determining the cluster centers, their number, and the final clusters is a challenging problem. To solve this problem, we propose a multi-center clustering algorithm based on mutual nearest neighbors (MC-MNN for short). First, we design a center-point discovery algorithm based on mutual nearest neighbors, which can adaptively find center points without any parameters for data sets with different density distributions. Then, a sub-cluster discovery algorithm is designed based on the connection of center points. This algorithm can effectively utilize the role of multiple center points and can effectively cluster non-convex data sets. Finally, we design a merging algorithm, which obtains the final clusters based on the degree of overlap and the distance between sub-clusters. Compared with existing algorithms, MC-MNN has four advantages: (1) It can automatically obtain center points by using mutual nearest neighbors; (2) It runs without any parameters; (3) It can adaptively find the final number of clusters; (4) It can effectively cluster arbitrarily distributed data sets. Experiments show the effectiveness of MC-MNN, and its superiority is verified by comparison with five related algorithms.
{"title":"A multi-center clustering algorithm based on mutual nearest neighbors for arbitrarily distributed data","authors":"Wuning Tong, Yuping Wang, Delong Liu, Xiulin Guo","doi":"10.3233/ica-220682","DOIUrl":"https://doi.org/10.3233/ica-220682","url":null,"abstract":"Multi-center clustering algorithms have attracted the attention of researchers because they can deal with complex data sets more effectively. However, the reasonable determination of cluster centers and their number as well as the final clusters is a challenging problem. In order to solve this problem, we propose a multi-center clustering algorithm based on mutual nearest neighbors (briefly MC-MNN). Firstly, we design a center-point discovery algorithm based on mutual nearest neighbors, which can adaptively find center points without any parameters for data sets with different density distributions. Then, a sub-cluster discovery algorithm is designed based on the connection of center points. This algorithm can effectively utilize the role of multiple center points, and can effectively cluster non-convex data sets. Finally, we design a merging algorithm, which can effectively obtain final clusters based on the degree of overlapping and distance between sub-clusters. Compared with existing algorithms, the MC-MNN has four advantages: (1) It can automatically obtain center points by using the mutual nearest neighbors; (2) It runs without any parameters; (3) It can adaptively find the final number of clusters; (4) It can effectively cluster arbitrarily distributed data sets. 
Experiments show the effectiveness of the MC-MNN and its superiority is verified by comparing with five related algorithms.","PeriodicalId":50358,"journal":{"name":"Integrated Computer-Aided Engineering","volume":"45 1","pages":"259-275"},"PeriodicalIF":6.5,"publicationDate":"2022-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74017698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
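The mutual-nearest-neighbor relation underlying MC-MNN's center-point discovery can be sketched as follows. The abstract states the paper's procedure is parameter-free; this illustration instead uses an explicit neighborhood size `k` for simplicity, so both the function name and the parameter are assumptions rather than the paper's method.

```python
import numpy as np

def mutual_nearest_neighbors(points: np.ndarray, k: int = 3) -> set:
    """Return the set of mutual k-nearest-neighbor pairs (i, j) with i < j.

    Two points are mutual neighbors when each appears in the other's
    k-nearest-neighbor list; MC-MNN builds its center-point discovery
    on this kind of symmetric neighborhood relation.
    """
    # Pairwise squared Euclidean distances, (n, n).
    diff = points[:, None, :] - points[None, :, :]
    dist = np.einsum("ijk,ijk->ij", diff, diff)
    np.fill_diagonal(dist, np.inf)  # exclude self-matches
    # Indices of each point's k nearest neighbors.
    knn = np.argsort(dist, axis=1)[:, :k]
    neighbor_sets = [set(row) for row in knn]
    pairs = set()
    for i, nbrs in enumerate(neighbor_sets):
        for j in nbrs:
            if i in neighbor_sets[j]:  # the relation holds in both directions
                pairs.add((min(i, int(j)), max(i, int(j))))
    return pairs
```

On two well-separated point pairs with `k=1`, each point's sole nearest neighbor reciprocates, so exactly the within-pair links are returned; asymmetric neighbor links across clusters are discarded, which is what makes the relation useful for locating density centers.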