Most recent solutions for user authentication in Industry 4.0 scenarios are based on unique biological characteristics that are captured from users and recognized using artificial intelligence and machine learning technologies. These biometric applications tend to be computationally heavy, so to monitor users unobtrusively, the sensing and processing modules are physically separated and connected through point-to-point wireless communication technologies. In this approach, however, the sensors are severely resource constrained, and common cryptographic techniques for protecting private user information while it travels over the radio channel cannot be implemented because of their computational cost. New security solutions for the short-range wireless communications of these biometric authentication systems are therefore needed. In this paper, we propose a new cryptographic approach addressing this scenario. The proposed solution employs lightweight operations to build a secure symmetric cipher. The cipher includes a pseudo-random number generator, also based on simple, computationally low-cost operations, to create the secret key. To preserve good security properties, both the key generation and the encryption processes are fed with a chaotic number sequence obtained through the numerical integration of a new fourth-order hyperchaotic dynamic. An experimental analysis and a performance evaluation are provided, showing the good behavior of the described solution.
Borja Bordel, R. Alcarria and T. Robles, "Lightweight encryption for short-range wireless biometric authentication systems in Industry 4.0," Integrated Computer-Aided Engineering, pp. 153-173, doi: 10.3233/ica-210673 (published 2021-12-31).
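The cipher construction described in this abstract (a keystream derived from numerically integrating a hyperchaotic system, combined with the plaintext through lightweight operations) can be illustrated with a minimal sketch. The paper's new fourth-order dynamic and its key schedule are not reproduced here; the well-known hyperchaotic Rössler system and all numerical parameters below are stand-ins chosen for illustration.

```python
# Minimal sketch, not the paper's cipher: a keystream is produced by numerically
# integrating a 4D hyperchaotic system (hyperchaotic Rossler used as a stand-in
# for the paper's new dynamic) and XORed with the plaintext. The step size,
# warm-up length and byte quantization rule are illustrative assumptions.
import numpy as np

def hyper_rossler(s, a=0.25, b=3.0, c=0.5, d=0.05):
    x, y, z, w = s
    return np.array([-y - z, x + a * y + w, b + x * z, -c * z + d * w])

def keystream(seed_state, n_bytes, dt=0.001, warmup=5000):
    s = np.array(seed_state, dtype=float)
    out = bytearray()
    for i in range(warmup + n_bytes):
        s = s + dt * hyper_rossler(s)              # explicit Euler integration step
        if i >= warmup:                            # discard the transient regime
            out.append(int(abs(s[0]) * 1e6) % 256) # quantize state into a byte
    return bytes(out)

def xor_cipher(data: bytes, seed_state) -> bytes:
    # symmetric: the same call encrypts and decrypts for the same shared seed
    ks = keystream(seed_state, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"biometric template fragment"
ct = xor_cipher(msg, (0.1, 0.2, 0.3, 0.4))
assert xor_cipher(ct, (0.1, 0.2, 0.3, 0.4)) == msg
```

Here the shared seed state plays the role of the secret key; in the paper the key is additionally produced by the lightweight pseudo-random number generator described in the abstract.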
F. Klawonn, "Editorial: Making an impact," Integrated Computer-Aided Engineering, pp. 1-2, doi: 10.3233/ica-210670 (published 2021-12-28).
In recent years, the growing use of Intelligent Personal Agents in different human activities and domains has led the corresponding research to focus on the design and development of agents that are not limited to interacting with humans and executing simple tasks. The latest research efforts have introduced Intelligent Personal Agents that use Natural Language Understanding (NLU) modules and Machine Learning (ML) techniques to hold complex dialogues with humans, execute complex plans of actions and effectively control smart devices. To this aim, this article introduces the second generation of the CERTH Intelligent Personal Agent (CIPA), which is based on the RASA framework and uses two machine learning models for NLU and dialogue flow classification. CIPA-Generation B provides a dialogue-story generator based on the idea of adjacency pairs and multiple intents, which classifies complex sentences containing two user intents into two automatic operations. More importantly, the agent can form a plan of actions for implicit Demand-Response and execute it, based on the user's request and by applying AI Planning methods. CIPA-Generation B has been deployed and tested in a real-world scenario at the Centre for Research and Technology Hellas (CERTH) nZEB SmartHome in two different domains, energy and health, for multiple intent recognition and dialogue handling. Furthermore, in the energy domain, a scenario demonstrating how the agent solves an implicit Demand-Response problem has been applied and evaluated. An experimental study with 36 participants further illustrates the usefulness and acceptance of the developed conversational agent-based system.
Anastasios Alexiadis, Angeliki Veliskaki, Alexandros Nizamis, A. Bintoudi, L. Zyglakis, Andreas K. Triantafyllidis, Ioannis Koskinas, D. Ioannidis, K. Votis and D. Tzovaras, "A smarthome conversational agent performing implicit demand-response application planning," Integrated Computer-Aided Engineering, pp. 43-61, doi: 10.3233/ica-210669 (published 2021-10-22).
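As a rough illustration of the adjacency-pair idea mentioned above (a compound sentence carrying two user intents mapped to two automatic operations), the toy sketch below replaces the trained RASA NLU and dialogue models with simple keyword rules; the intent names and device actions are hypothetical, not part of CIPA.

```python
# Toy sketch of multi-intent handling, not the CIPA/RASA implementation:
# a compound utterance is split into two intents and one action is executed
# per intent. Intent names and action handlers are hypothetical placeholders.
from typing import Callable, Dict, List

ACTIONS: Dict[str, Callable[[], str]] = {
    "turn_on_lights": lambda: "lights switched on",
    "report_consumption": lambda: "current consumption is 1.2 kWh",
}

def classify_intents(utterance: str) -> List[str]:
    # stand-in for the trained NLU classifier: keyword rules instead of ML
    text = utterance.lower()
    intents = []
    if "light" in text:
        intents.append("turn_on_lights")
    if "consum" in text or "energy" in text:
        intents.append("report_consumption")
    return intents

def handle(utterance: str) -> List[str]:
    # one automatic operation per recognized intent, executed in order
    return [ACTIONS[i]() for i in classify_intents(utterance)]

print(handle("Turn on the lights and tell me the energy consumption"))
# -> ['lights switched on', 'current consumption is 1.2 kWh']
```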
Explainable recommendations enable users to understand why certain items are suggested and, ultimately, nurture system transparency, trustworthiness, and confidence. Large crowdsourcing recommendation systems ought to promote the authenticity and transparency of their recommendations. To address this challenge, this paper proposes stream-based explainable recommendations via blockchain profiling. Our contribution relies on chained historical data to improve the quality and transparency of online collaborative recommendation filters, both memory-based and model-based, using as use cases data streamed from two large tourism crowdsourcing platforms, Expedia and TripAdvisor. Building historical trust-based models of raters, our method is implemented as an external module and integrated with the collaborative filter through a post-recommendation component. The history, traceability and authenticity of the inter-user trust profiles are ensured by blockchain, since these profiles are stored as a smart contract in a private Ethereum network. Our empirical evaluation on the HotelExpedia and TripAdvisor data has consistently shown the positive impact of blockchain-based profiling on the quality (measured as recall) and transparency (determined via explanations) of recommendations.
Fátima Leal, Bruno Veloso, Benedita Malheiro, J. C. Burguillo, Adriana E. Chis and H. González-Vélez, "Stream-based explainable recommendations via blockchain profiling," Integrated Computer-Aided Engineering, pp. 105-121, doi: 10.3233/ica-210668 (published 2021-09-29).
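A minimal sketch of the stream-based trust profiling idea follows: each incoming rating updates the rater's trust as agreement with the running consensus for that item, and trust then weights the collaborative predictions. The blockchain layer (profiles stored as a smart contract on a private Ethereum network) and the actual memory-/model-based filters are not reproduced; the update and weighting rules below are illustrative assumptions.

```python
# Sketch of stream-based trust profiling feeding a trust-weighted prediction.
# The trust rule (agreement with the running item consensus within a tolerance)
# and the weighting scheme are illustrative, not the paper's exact method.
from collections import defaultdict

item_sum = defaultdict(float); item_cnt = defaultdict(int)
trust_hits = defaultdict(int); trust_total = defaultdict(int)

def process_rating(user, item, rating, tol=1.0):
    # compare against the current consensus before folding the new rating in
    if item_cnt[item] > 0:
        consensus = item_sum[item] / item_cnt[item]
        trust_hits[user] += int(abs(rating - consensus) <= tol)
        trust_total[user] += 1
    item_sum[item] += rating; item_cnt[item] += 1

def trust(user):
    # neutral trust of 0.5 until the user has rated at least one known item
    return trust_hits[user] / trust_total[user] if trust_total[user] else 0.5

def predict(item, neighbour_ratings):
    # trust-weighted average over neighbours' ratings for the item
    num = sum(trust(u) * r for u, r in neighbour_ratings)
    den = sum(trust(u) for u, _ in neighbour_ratings)
    return num / den if den else None

for u, i, r in [("ana", "hotel1", 4.0), ("bob", "hotel1", 4.5), ("eve", "hotel1", 1.0)]:
    process_rating(u, i, r)
print(predict("hotel1", [("ana", 4.0), ("eve", 1.0)]))
```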
Accurate segmentation of casting defects plays a positive role in the quality control of casting products and is of great significance for the accurate extraction of the mechanical properties of defects during the casting solidification process. However, as the shapes of casting defects are complex and irregular, segmenting them with existing segmentation methods is challenging. To address this, a spectrum-domain instance segmentation model (SISN) is proposed for accurately segmenting five types of casting defects with complex shapes: inclusions, shrinkage, hot tearing, cold tearing and micro pores. The proposed model consists of three sub-models: the spectrum-domain region proposal model (SRPN), the spectrum-domain region of interest alignment model (SRoIAlign) and the spectrum-domain instance generation model (SIGN). SRPN uses a multi-scale anchoring mechanism to detect defects of various sizes, where the SSReLU and SCPool functions are used to solve the spectrum-domain gradient explosion and over-fitting problems. SRoIAlign uses a floating-point quantization operation and trilinear interpolation to map the 3D proposals to feature values accurately. SIGN is a full-spectrum-domain neural network applied to the 3D proposals, generating a segmentation instance of the defects in a point-wise manner. In the experiments, we test the effectiveness of the proposed model in three respects: segmentation accuracy, time performance and the accuracy of mechanical property extraction.
Jinhua Lin, Lin Ma and Yu Yao, "A spectrum-domain instance segmentation model for casting defects," Integrated Computer-Aided Engineering, pp. 63-82, doi: 10.3233/ica-210666 (published 2021-09-17).
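Of the three sub-models, the sampling step of SRoIAlign is the most self-contained: feature values at the floating-point locations of a 3D proposal are obtained by trilinear interpolation. The sketch below shows only that interpolation step on a plain NumPy volume; the spectrum-domain layers, SRPN and SIGN are not reproduced, and the sample coordinates are arbitrary.

```python
# Sketch of the trilinear-interpolation sampling used by RoIAlign-style modules
# on 3D feature volumes: a value at a fractional (x, y, z) location is blended
# from the eight surrounding voxels.
import numpy as np

def trilinear_sample(volume, x, y, z):
    """Sample a 3D feature volume at a floating-point location (x, y, z)."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    x1 = min(x0 + 1, volume.shape[0] - 1)
    y1 = min(y0 + 1, volume.shape[1] - 1)
    z1 = min(z0 + 1, volume.shape[2] - 1)
    xd, yd, zd = x - x0, y - y0, z - z0
    # interpolate along x, then y, then z
    c00 = volume[x0, y0, z0] * (1 - xd) + volume[x1, y0, z0] * xd
    c01 = volume[x0, y0, z1] * (1 - xd) + volume[x1, y0, z1] * xd
    c10 = volume[x0, y1, z0] * (1 - xd) + volume[x1, y1, z0] * xd
    c11 = volume[x0, y1, z1] * (1 - xd) + volume[x1, y1, z1] * xd
    c0 = c00 * (1 - yd) + c10 * yd
    c1 = c01 * (1 - yd) + c11 * yd
    return c0 * (1 - zd) + c1 * zd

vol = np.random.rand(8, 8, 8)
print(trilinear_sample(vol, 2.3, 4.7, 1.5))
```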
Activity recognition technologies only perform well under controlled conditions, where a limited number of actions is allowed. Industrial applications, on the contrary, are scenarios with real and uncontrolled conditions, where thousands of different activities (such as transporting or manufacturing craft products) with enormous variability may be carried out. In this context, new and enhanced human activity recognition technologies are needed. Therefore, in this paper, a new activity recognition technology focused on Industry 4.0 scenarios is proposed. The proposed mechanism consists of several steps: a first analysis phase, where physical signals are processed using moving averages, filters and signal processing techniques, and an atomic recognition step, where Dynamic Time Warping and k-nearest neighbors solutions are integrated; a second phase, where activities are modeled using generalized Markov models and context labels are recognized using a multi-layer perceptron; and a third step, where activities are recognized using the previously created Markov models and the context information, formatted as labels. The proposed solution achieves a best recognition rate of 87%, which demonstrates the efficacy of the described method. Compared to state-of-the-art solutions, an improvement of up to 10% is reported.
Borja Bordel, R. Alcarria and T. Robles, "Recognizing human activities in Industry 4.0 scenarios through an analysis-modeling-recognition algorithm and context labels," Integrated Computer-Aided Engineering, pp. 83-103, doi: 10.3233/ica-210667 (published 2021-09-16).
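The atomic recognition step described above combines Dynamic Time Warping with a k-nearest-neighbors decision. A minimal 1-NN version is sketched below on one-dimensional signals; the filtering phase, the generalized Markov models and the context-label perceptron are omitted, and the templates and labels are illustrative.

```python
# Sketch of DTW + nearest-neighbour atomic activity recognition (1-NN case).
# Templates, labels and signals are illustrative; preprocessing, Markov models
# and the context MLP from the paper are not reproduced here.
import numpy as np

def dtw(a, b):
    # classic O(len(a) * len(b)) dynamic-programming DTW distance
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nn_dtw(query, templates):
    # templates: list of (label, signal); return the label of the closest one
    return min(templates, key=lambda t: dtw(query, t[1]))[0]

templates = [("walking", np.sin(np.linspace(0, 6 * np.pi, 60))),
             ("lifting", np.linspace(0.0, 1.0, 60))]
print(nn_dtw(np.sin(np.linspace(0, 6 * np.pi, 60)) + 0.05, templates))  # "walking"
```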
In classification tasks, feature selection (FS) can reduce the data dimensionality and may also improve classification accuracy; these two goals are commonly treated as the two objectives of FS problems. Many meta-heuristic algorithms have been applied to FS problems, and they perform satisfactorily when the problem is relatively simple. However, once the dimensionality of the dataset grows, their performance drops dramatically. This paper proposes a self-adaptive multi-objective genetic algorithm (SaMOGA) for FS, designed to maintain high performance even as the dimensionality of the dataset grows. The main concept of SaMOGA lies in the dynamic selection of five different crossover operators at different stages of the evolution process through a self-adaptive mechanism. Meanwhile, a search stagnation detection mechanism is also proposed to prevent premature convergence. In the experiments, we compare SaMOGA with five multi-objective FS algorithms on sixteen datasets. According to the experimental results, SaMOGA yields a set of well-converged and well-distributed solutions on most datasets, indicating that SaMOGA can guarantee classification performance while removing many features, and its advantage over its counterparts becomes more obvious as the dimensionality of the datasets grows.
Yu Xue, Hao Zhu and Ferrante Neri, "A self-adaptive multi-objective feature selection approach for classification problems," Integrated Computer-Aided Engineering, pp. 3-21, doi: 10.3233/ica-210664 (published 2021-08-05).
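The self-adaptive element of SaMOGA, as described above, is the dynamic choice among five crossover operators during evolution. A minimal sketch of one way to realize such operator self-adaptation (credit scores updated by offspring success and sampled proportionally) is given below; the operator names, reward rule and decay constant are assumptions, not SaMOGA's exact scheme.

```python
# Sketch of self-adaptive crossover-operator selection: each operator keeps a
# score, operators are drawn with probability proportional to their scores, and
# scores are reinforced when an operator yields improving offspring.
import random

OPERATORS = ["one_point", "two_point", "uniform", "sbx", "shuffle"]
scores = {op: 1.0 for op in OPERATORS}        # optimistic initial credit

def pick_operator():
    # roulette-wheel selection proportional to current operator scores
    r = random.uniform(0, sum(scores.values()))
    acc = 0.0
    for op in OPERATORS:
        acc += scores[op]
        if r <= acc:
            return op
    return OPERATORS[-1]

def update_score(op, child_improved, decay=0.9, reward=1.0):
    # exponential forgetting plus a reward for producing a better child
    scores[op] = decay * scores[op] + (reward if child_improved else 0.0)

# inside a GA generation one would call, per crossover event:
#   op = pick_operator(); child = apply_crossover(op, p1, p2); update_score(op, improved)
```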
Manufacturing digitalisation is a critical part of the transition towards Industry 4.0. The digital twin plays a significant role as the instrument that enables digital access to precise real-time information about physical objects and supports the optimisation of the related processes by converting the big data associated with them into actionable information. A number of frameworks and conceptual models have been proposed in the research literature that address the requirements and benefits of digital twins, yet their applications are explored to a lesser extent. A time-domain machining vibration model based on a generative adversarial network (GAN) is proposed as a digital twin component in this paper. The developed conditional StyleGAN architecture enables (1) the extraction of knowledge from existing models and (2) a data-driven simulation applicable to production process optimisation. A novel solution to the challenges of GAN analysis is then developed, in which the comparison of maps of generative accuracy and sensitivity reveals patterns of similarity between these metrics. The sensitivity analysis is also extended to the mid-layer network level, identifying the sources of abnormal generative behaviour. This provides a sensitivity-based estimate of simulation uncertainty, which is important for validating the optimal process conditions derived from the proposed model.
E. Zotov, Ashutosh Tiwari and V. Kadirkamanathan, "Conditional StyleGAN modelling and analysis for a machining digital twin," Integrated Computer-Aided Engineering, pp. 399-415, doi: 10.3233/ICA-210662 (published 2021-07-23).
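One way to read the sensitivity analysis mentioned above is as the response of the generated signal to small perturbations of the conditioning input. The sketch below estimates such a sensitivity by finite differences on a toy generator function; it is a stand-in for the paper's conditional StyleGAN, and the signal model and step size are arbitrary assumptions.

```python
# Sketch of a finite-difference sensitivity estimate for a conditional
# generator G(z, c). The toy generator below is a placeholder signal model,
# not the paper's conditional StyleGAN.
import numpy as np

def toy_generator(z, condition):
    # placeholder for G(z, c): a damped oscillation whose frequency follows c
    t = np.linspace(0.0, 1.0, 256)
    return np.exp(-3.0 * t) * np.sin(2.0 * np.pi * (5.0 + 10.0 * condition) * t + z)

def condition_sensitivity(gen, z, condition, eps=1e-3):
    # central finite-difference sensitivity of the generated signal w.r.t. c
    delta = gen(z, condition + eps) - gen(z, condition - eps)
    return np.linalg.norm(delta) / (2.0 * eps)

print(condition_sensitivity(toy_generator, z=0.0, condition=0.5))
```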
3D mesh subdivision is essential for the geometric modeling of complex surfaces, which benefits many important multimedia applications such as computer animation. However, in ordinary adaptive subdivision, as the subdivision level deepens, the benefits gained from the improvement in smoothness cannot keep pace with the cost caused by the growing number of faces. To mitigate the gap between smoothness and the number of faces, this paper devises a novel improved mesh subdivision method that coordinates the two in a harmonious way. First, this paper introduces a variable threshold, rather than the constant threshold used in existing adaptive subdivision methods, to reduce the number of redundant faces while keeping the smoothness in each subdivision iteration. Second, to achieve this goal, a new crack-solving method is developed that removes cracks by refining the faces adjacent to the subdivided area. Third, as a result, the problem of coordinating the smoothness and the number of faces can be formulated as a multi-objective optimization problem in which the possible threshold sequences constitute the solution space. Finally, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) is improved to efficiently search the Pareto frontier. Extensive experiments demonstrate that the proposed method consistently outperforms existing mesh subdivision methods in different settings.
Yaqian Liang, Fazhi He, Xiantao Zeng and Jinkun Luo, "An improved loop subdivision to coordinate the smoothness and the number of faces via multi-objective optimization," Integrated Computer-Aided Engineering, pp. 23-41, doi: 10.3233/ICA-210661 (published 2021-07-23).
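The multi-objective formulation above scores each candidate threshold sequence on two minimized objectives, the number of faces and a smoothness error, and keeps the non-dominated candidates. The sketch below shows only that Pareto-filtering view with hard-coded example objective values; the improved NSGA-II machinery and the actual subdivision and crack-solving steps are not reproduced.

```python
# Sketch of the bi-objective view: candidate threshold sequences scored on
# (number of faces, smoothness error), both minimized, filtered to the Pareto
# front. Objective values here are hard-coded placeholders.
def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    # candidates: list of (threshold_sequence, (num_faces, smoothness_error))
    front = []
    for i, (cand, obj) in enumerate(candidates):
        dominated = any(dominates(o2, obj)
                        for j, (_, o2) in enumerate(candidates) if j != i)
        if not dominated:
            front.append((cand, obj))
    return front

candidates = [([0.5, 0.4], (1200, 0.08)),
              ([0.3, 0.2], (2100, 0.03)),
              ([0.6, 0.6], (1300, 0.09))]
print(pareto_front(candidates))   # the third candidate is dominated by the first
```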
This paper presents a novel methodology that uses classification for day-ahead traffic prediction. It addresses the research question of whether the traffic state can be forecast based on meteorological conditions, seasonality and time intervals, as well as COVID-19 related restrictions. We propose reliable models that utilize smaller data partitions. Apart from feature selection, we incorporate new features related to movement restrictions due to COVID-19, forming a novel data model. Our methodology explores the desired training subset. The results show that various models can be developed, with varying levels of success. The best outcome was achieved when factoring in all relevant features and training on a proposed subset. Accuracy improved significantly compared to previously published work.
S. Liapis, Konstantinos Christantonis, Victor Chazan Pantzalis, Anastassios Manos, D. Filippidou and Christos Tjortjis, "A methodology using classification for traffic prediction: Featuring the impact of COVID-19," Integrated Computer-Aided Engineering, pp. 417-435, doi: 10.3233/ICA-210663 (published 2021-07-23).
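A minimal sketch of a day-ahead traffic-state classifier in the spirit described above is given below, with weather, seasonality/time-interval and COVID-restriction features feeding a standard classifier; the feature names, the toy data and the choice of a random forest are illustrative assumptions, not the paper's exact models or data.

```python
# Sketch of classification-based traffic-state prediction from meteorological,
# seasonality/time-interval and COVID-restriction features. Data and feature
# names are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.DataFrame({
    "temperature": [14, 22, 8, 17, 25, 5],
    "rainfall_mm": [0.0, 1.2, 5.4, 0.0, 0.0, 3.1],
    "month": [3, 6, 12, 4, 7, 1],
    "time_slot": [8, 18, 8, 13, 18, 8],        # hour bucket of the day
    "covid_restriction": [1, 0, 1, 0, 0, 1],   # lockdown-level flag
    "traffic_state": ["low", "high", "low", "medium", "high", "low"],
})

X, y = df.drop(columns="traffic_state"), df["traffic_state"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```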