Comparative Study of Crossovers for Decision Space Diversity of Non-Dominated Solutions
Motoki Sato, A. Oyama
Pub Date: 2021-12-05 | DOI: 10.1109/SSCI50451.2021.9660042
2021 IEEE Symposium Series on Computational Intelligence (SSCI)
Capturing the diversity of non-dominated and dominated solutions in decision space is important in real-world multiobjective optimization, because it provides a decision maker with many options. This paper studies how different crossover operators affect the decision-space diversity of the non-dominated and dominated solutions obtained by multiobjective evolutionary algorithms (MOEAs). We compare the solutions obtained by NSGA-II with simulated binary crossover (SBX), unimodal normal distribution crossover (UNDX), the reproduction process of differential evolution (DE), or blend crossover (BLX-α) on the speed reducer design (SRD) problem and the Mazda problem. The results show that the choice of crossover operator significantly affects the decision-space diversity of the non-dominated and dominated solutions obtained by an MOEA.
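The SBX operator compared in the abstract above has a compact closed form. The sketch below is a minimal illustrative implementation (not the authors' code); the distribution index `eta` and the shared variable bounds are assumed parameters. A key property is that each pair of children is symmetric about its parents' midpoint before boundary clipping.

```python
import random

def sbx_crossover(p1, p2, eta=15.0, bounds=(0.0, 1.0)):
    """Simulated binary crossover (SBX): the spread factor beta follows a
    polynomial distribution controlled by eta; larger eta keeps children
    closer to their parents."""
    lo, hi = bounds
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        y1 = 0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2)
        y2 = 0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2)
        c1.append(min(max(y1, lo), hi))
        c2.append(min(max(y2, lo), hi))
    return c1, c2
```

Since the paper's point is decision-space diversity, note that `eta` directly controls how far children spread from their parents, which is one mechanism by which operator choice changes diversity.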
Incremental and Semi-Supervised Learning of 16S-rRNA Genes For Taxonomic Classification
Emrecan Ozdogan, Norman C. Sabin, Thomas Gracie, Steven Portley, Mali Halac, Thomas Coard, William Trimble, B. Sokhansanj, G. Rosen, R. Polikar
Pub Date: 2021-12-05 | DOI: 10.1109/SSCI50451.2021.9660093
2021 IEEE Symposium Series on Computational Intelligence (SSCI)
Genome sequencing generates large volumes of data and hence requires increasingly large computational resources. The growing-data problem is even more acute in metagenomics applications, where data from an environmental sample include many organisms rather than the single organism of conventional sequencing. Traditional taxonomic classification and clustering approaches and platforms, while designed to be computationally efficient, cannot incrementally update a previously trained system when new data arrive, and instead require complete re-training with the augmented (old plus new) data. Such complete retraining is inefficient and leads to poor utilization of computational resources. The ability to update a classification system with only the new data yields a much lower run-time as new data are presented and does not require re-training on the entire previous dataset. In this paper, we propose Incremental VSEARCH (I-VSEARCH) and its semi-supervised version for taxonomic classification, as well as a threshold-independent VSEARCH (TI-VSEARCH), as wrappers around VSEARCH, a well-established (unsupervised) clustering algorithm for metagenomics. We show, on a 16S rRNA gene dataset, that I-VSEARCH, running incrementally only on the new batches of data that become available over time, loses no accuracy relative to VSEARCH running on the full data, while providing attractive computational benefits.
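VSEARCH itself is a command-line tool, so the sketch below only illustrates the incremental idea the abstract describes with a toy greedy centroid clusterer: each new batch is compared against the centroids accumulated so far rather than re-clustering everything. The function names and the identity measure are hypothetical, not the VSEARCH API.

```python
def similarity(a, b):
    """Fraction of matching positions between two equal-length sequences
    (a stand-in for VSEARCH's pairwise identity)."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def incremental_cluster(batches, threshold=0.97):
    """Greedy centroid clustering, batch by batch: each new sequence joins
    the first existing centroid at or above the identity threshold,
    otherwise it founds a new cluster. Earlier batches are never revisited,
    which is what keeps the incremental run-time low."""
    centroids, assignments = [], []
    for batch in batches:
        for seq in batch:
            for i, c in enumerate(centroids):
                if similarity(seq, c) >= threshold:
                    assignments.append(i)
                    break
            else:
                centroids.append(seq)
                assignments.append(len(centroids) - 1)
    return centroids, assignments
```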
Zero-Reference Fractional-Order Low-Light Image Enhancement Based on Retinex Theory
Q. Zhang, Feiqi Fu, Kaixiang Zhang, Feng Lin, Jian Wang
Pub Date: 2021-12-05 | DOI: 10.1109/SSCI50451.2021.9659908
2021 IEEE Symposium Series on Computational Intelligence (SSCI)
The quality of images taken in insufficiently lit environments is degraded, and such images limit the application of machine vision technology. To address this issue, many researchers have focused on enhancing low-light images. This paper presents a zero-reference learning method for low-light image enhancement. A deep network is built to estimate the illumination component of the low-light image. We use the original image and its derivative map to define a zero-reference loss function based on illumination constraints and prior conditions, and the deep network is trained by minimizing this loss function. The final image is obtained according to Retinex theory. In addition, we use a fractional-order mask to preserve image details and naturalness. Experiments on several datasets demonstrate that the proposed algorithm achieves effective low-light image enhancement, and the results indicate its superiority over state-of-the-art algorithms.
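The Retinex step the abstract relies on is a pixel-wise division: observed image = reflectance × illumination, so the enhanced (reflectance) image is the observation divided by the estimated illumination. The sketch below shows this on a 1-D grayscale signal; the crude local-max/box-blur illumination estimate is an assumed stand-in for the paper's learned deep network.

```python
def estimate_illumination(img, radius=1):
    """Rough illumination estimate: local max followed by a box blur.
    (In the paper this component is produced by a trained deep network.)"""
    n = len(img)
    local_max = [max(img[max(0, i - radius):min(n, i + radius + 1)])
                 for i in range(n)]
    blurred = []
    for i in range(n):
        window = local_max[max(0, i - radius):min(n, i + radius + 1)]
        blurred.append(sum(window) / len(window))
    return blurred

def retinex_enhance(img, eps=1e-6):
    """Retinex decomposition: reflectance = observation / illumination,
    clipped to the valid intensity range [0, 1]."""
    illum = estimate_illumination(img)
    return [min(1.0, p / (l + eps)) for p, l in zip(img, illum)]
```

Because the illumination estimate of a dark scene is small, the division brightens every pixel while preserving relative structure, which is the core of Retinex-based enhancement.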
Separate Sound into STFT Frames to Eliminate Sound Noise Frames in Sound Classification
Thanh Tran, Kien Bui Huy, Nhat Truong Pham, M. Carratù, C. Liguori, J. Lundgren
Pub Date: 2021-12-05 | DOI: 10.1109/SSCI50451.2021.9660125
2021 IEEE Symposium Series on Computational Intelligence (SSCI)
Sounds always contain acoustic and background noise that affects the accuracy of a sound classification system; suppressing this noise can therefore improve the robustness of the classification model. This paper investigates a sound separation technique that splits the input sound into many overlapped-content Short-Time Fourier Transform (STFT) frames. Our approach differs from the traditional STFT conversion method, which converts each sound into a single STFT image. In contrast, separating the sound into many STFT frames improves model prediction accuracy by increasing variability in the data and learning from that variability. The separated frames are saved as images, labeled manually as clean or noisy, and then fed into transfer-learning convolutional neural networks (CNNs) for the classification task. The pre-trained CNN architectures that learn from these frames become robust against the noise. Experimental results show that the proposed approach is robust against noise and achieves 94.14% accuracy in classifying 21 classes: 20 classes of sound events plus a noisy class. An open-source repository of the proposed method and results is available at https://github.com/nhattruongpham/soundSepsound.
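The framing step the abstract contrasts with single-image STFT conversion can be sketched as follows. This is an illustrative toy (pure-Python framing plus a naive DFT magnitude); the frame length and hop size are assumed, and a real pipeline would use an FFT library and render each spectrum as an image.

```python
import cmath

def split_into_frames(signal, frame_len, hop):
    """Split a 1-D signal into overlapping frames; hop < frame_len gives
    the overlapped content described in the paper. Each frame becomes its
    own STFT image, to be labeled clean or noisy."""
    return [signal[start:start + frame_len]
            for start in range(0, len(signal) - frame_len + 1, hop)]

def frame_spectra(frames):
    """Naive DFT magnitude per frame: the per-frame 'image' content."""
    spectra = []
    for fr in frames:
        n = len(fr)
        spectra.append([abs(sum(fr[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                                for t in range(n)))
                        for k in range(n)])
    return spectra
```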
Self-learning Wavelet Compression Method for Data Transmission from Environmental Monitoring Stations with a Low Bandwidth IoT Interface
Jaromír Konecny, Monika Borova, Michal Prauzek
Pub Date: 2021-12-05 | DOI: 10.1109/SSCI50451.2021.9660160
2021 IEEE Symposium Series on Computational Intelligence (SSCI)
The Internet of Things concept raises the possibility of connecting monitoring stations to the Internet. In many cases these devices are equipped with a wireless interface that transmits data through a low-power wide-area network (LPWAN). This type of network has limited data throughput due to technological limitations and regional restrictions, and maximizing the useful information transmitted through such a limited channel poses many research challenges. This paper presents a self-learning wavelet compression method controlled by Q-learning (QL) that optimizes the amount of transmitted data using lossy compression. The aim is to use the transmission channel throughput as effectively as possible without loss of data. A QL agent selects an appropriate compression method according to buffer use and maintains buffer utilization at 70%. The proposed method was tested on historical environmental data. The results show that our method uses more than 96% of the available transmission channel throughput with minimal data loss, even when the channel throughput changes significantly.
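The control loop the abstract describes (a QL agent picking a compression level from buffer fill) can be sketched with tabular Q-learning. Everything concrete here is an assumption for illustration: the buffer dynamics are a toy stand-in for the real LPWAN link, the three compression ratios are invented, and the paper's wavelet compression is not modeled.

```python
import random

def train_buffer_agent(episodes=2000, target=0.7, seed=1):
    """Tabular Q-learning sketch: state = buffer fill level (10 bins),
    action = compression ratio; reward penalizes distance from the 70%
    target fill used in the paper."""
    rng = random.Random(seed)
    ratios = [0.2, 0.5, 1.0]          # fraction of data kept after compression
    q = [[0.0] * len(ratios) for _ in range(10)]
    alpha, gamma, eps = 0.2, 0.9, 0.1
    fill = 0.5
    for _ in range(episodes):
        s = min(9, int(fill * 10))
        if rng.random() < eps:        # epsilon-greedy action selection
            a = rng.randrange(len(ratios))
        else:
            a = max(range(len(ratios)), key=lambda i: q[s][i])
        inflow = rng.uniform(0.05, 0.15) * ratios[a]   # compressed data arriving
        outflow = 0.05                                 # fixed channel drain
        fill = min(1.0, max(0.0, fill + inflow - outflow))
        s2 = min(9, int(fill * 10))
        r = -abs(fill - target)
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
    return q, fill
```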
A preliminary evaluation of Echo State Networks for Brugada syndrome classification
Giovanna Maria Dimitri, C. Gallicchio, A. Micheli, M.A. Morales, E. Ungaro, F. Vozzi
Pub Date: 2021-12-05 | DOI: 10.1109/SSCI50451.2021.9659966
2021 IEEE Symposium Series on Computational Intelligence (SSCI)
In the present study, recurrent neural networks, in particular Echo State Networks (ESNs), are applied to the prediction of Brugada Syndrome (BrS) from electrocardiogram (ECG) signals. The research is rooted in BrAID (Brugada syndrome and Artificial Intelligence applications to Diagnosis), a project aimed at developing an innovative system for the early detection and classification of BrS Type 1. The ultimate objective of the BrAID platform is to help clinicians improve the BrS diagnosis process, detect patterns in the ECG, and combine them with multi-omics information through Artificial Intelligence (AI) and Machine Learning (ML) models such as ESNs. We report novel preliminary results of this approach, presenting the first baseline accuracy results for BrS recognition from ECG analysis with ESNs. These results are particularly encouraging and may shed light on the possibility of using this model as a computational-intelligence clinical support tool for healthcare applications.
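An ESN, as used in the study above, drives a fixed random recurrent reservoir with the input signal and trains only a linear readout on the reservoir states. The sketch below shows the reservoir state update alone; the reservoir size, weight ranges, and the crude spectral-radius scaling are illustrative assumptions, not the authors' configuration.

```python
import math
import random

def esn_states(inputs, n_res=20, seed=0):
    """Minimal echo state network reservoir: fixed random input and
    recurrent weights, tanh units, contractive scaling so past inputs
    fade (the 'echo state' property). A linear readout trained on the
    returned states would do the actual classification."""
    rng = random.Random(seed)
    w_in = [rng.uniform(-0.5, 0.5) for _ in range(n_res)]
    w = [[rng.uniform(-1.0, 1.0) for _ in range(n_res)] for _ in range(n_res)]
    scale = 0.9 / n_res               # crude stand-in for spectral-radius tuning
    x = [0.0] * n_res
    states = []
    for u in inputs:                  # u: one ECG sample per time step
        x = [math.tanh(w_in[i] * u
                       + scale * sum(w[i][j] * x[j] for j in range(n_res)))
             for i in range(n_res)]
        states.append(x)
    return states
```

Because only the readout is trained (typically by ridge regression), ESNs are cheap to fit, which suits a preliminary clinical baseline like the one reported.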
Characterization of Deep Learning-Based Aerial Explosive Hazard Detection using Simulated Data
Brendan Alvey, Derek T. Anderson, Clare Yang, A. Buck, James M. Keller, Ken Yasuda, Hollie Ryan
Pub Date: 2021-12-05 | DOI: 10.1109/SSCI50451.2021.9659899
2021 IEEE Symposium Series on Computational Intelligence (SSCI)
Automatic object detection is one of the most common and fundamental tasks in computational intelligence (CI), and neural networks (NNs) are now often the tool of choice for it. Unlike more traditional approaches with interpretable parameters, explaining what an NN has learned and characterizing under what conditions the model does and does not perform well is a challenging yet important task. The most straightforward way to evaluate performance is to run test imagery through a model. However, the growing popularity of self-supervised methods among big players such as Tesla and Google is evidence that labeled data are scarce in real-world settings. On the other hand, modern high-fidelity graphics simulation is now accessible and programmable, allowing the generation of large amounts of accurately labeled training and testing data for CI. Herein, we describe a framework that uses simulation to assess the performance of an NN model for automatic explosive hazard detection (EHD) from an unmanned aerial vehicle. The data were generated with the Unreal Engine and Microsoft's AirSim plugin. A workflow for generating simulated data and using it to assess and understand the strengths and weaknesses of a learned EHD model is demonstrated.
Object Detection Using Deep Convolutional Generative Adversarial Networks Embedded Single Shot Detector with Hyper-parameter Optimization
Ranjith Dinakaran, Li Zhang
Pub Date: 2021-12-05 | DOI: 10.1109/SSCI50451.2021.9659855
2021 IEEE Symposium Series on Computational Intelligence (SSCI)
It is challenging to identify optimal network configurations for large-scale deep neural networks with cascaded structures. In this research, we propose a hybrid end-to-end model for object detection that integrates a Deep Convolutional Generative Adversarial Network (DCGAN) with a Single Shot Detector (SSD). We then employ the Particle Swarm Optimization (PSO) algorithm to identify hyperparameters for the DCGAN-SSD model. The detected class labels and salient regional features are used as inputs to a Long Short-Term Memory (LSTM) network for image description generation. Evaluated on a video dataset in the wild, the empirical results indicate the efficiency of the proposed PSO-enhanced DCGAN-SSD detector with respect to both object detection and image description generation.
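The PSO step in the abstract above is a generic black-box search over hyperparameters. The sketch below is plain PSO minimizing an arbitrary objective `f`; in the paper the objective would be a validation metric of the DCGAN-SSD model, but here `f`, the search range, and the inertia/acceleration constants are all illustrative assumptions.

```python
import random

def pso_minimize(f, dim, iters=100, n_particles=15, seed=3):
    """Plain particle swarm optimization: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend inertia with
    pulls toward both."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For hyperparameter search, each particle position would encode one candidate configuration (learning rate, layer sizes, and so on), and `f` would train or partially train the model and return its validation loss.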
A Hierarchical Modeling Method for Complex Engineering System with Hybrid Dynamics
R. Wang, X. Wang, Jiechao Yang, Liwen Kang
Pub Date: 2021-12-05 | DOI: 10.1109/SSCI50451.2021.9659894
2021 IEEE Symposium Series on Computational Intelligence (SSCI)
Complex engineering systems can be continuous, discrete, or hybrid. Compared with the other kinds, hybrid systems are the most difficult to analyze because they combine continuous and discrete behaviors and are subject to internal and external uncertainties. Modeling and simulating a complex engineering system is necessary to explore its dynamics and evaluate the effect of different management strategies. Existing modeling and simulation methods for complex engineering systems, especially hybrid ones, mostly focus on solving problems in specific systems or scenarios, neglecting the reusability and simplicity of the methods. In this paper, a universal hierarchical modeling method for complex engineering systems is proposed, which illustrates how to deal with continuous and discrete dynamics, and a hybrid simulation method is applied to verify its feasibility and applicability. A copper smelter is taken as a case of a complex engineering system, and its production process is described with the proposed method. Simulation results show that the universal hierarchical modeling method describes the complex dynamics of the system properly and simply, which contributes to the study of complex engineering systems.
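The continuous/discrete combination that makes hybrid systems hard to analyze can be seen in even a tiny example. The thermostat below is a standard toy hybrid automaton, not the paper's smelter model: a continuous temperature law runs inside each discrete mode, and guard conditions trigger mode switches.

```python
def simulate_thermostat(steps=200, dt=0.1):
    """Toy hybrid system: continuous temperature dynamics plus a discrete
    heater mode with threshold-triggered switching -- the two behavior
    kinds a hierarchical model of a plant must combine."""
    temp, mode = 15.0, "on"
    trace = []
    for _ in range(steps):
        # continuous layer: temperature relaxes toward 30 (on) or 10 (off)
        target = 30.0 if mode == "on" else 10.0
        temp += (target - temp) * 0.5 * dt
        # discrete layer: mode switches when a guard condition fires
        if mode == "on" and temp >= 22.0:
            mode = "off"
        elif mode == "off" and temp <= 18.0:
            mode = "on"
        trace.append((temp, mode))
    return trace
```

A hierarchical model in the paper's sense would compose many such units, with the discrete layer coordinating the continuous sub-models.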
The Internet of Tall Buildings
H. Nieto-Chaupis
Pub Date: 2021-12-05 | DOI: 10.1109/SSCI50451.2021.9660060
2021 IEEE Symposium Series on Computational Intelligence (SSCI)
As seen in the building collapse at Surfside, Miami, FL, the permanent surveillance of tall buildings may require telecommunication technologies that, on one side, offer accuracy at various levels of prediction and, on the other, can be operated remotely. In this paper, a novel Internet is proposed, called the Internet of Tall Buildings, whose main purpose is to carry out, without interruption, precise measurements of small angular deviations. This is expected to yield a kind of instantaneous radiography of the spatial displacements of ceiling vertexes across a chain of tall buildings. Thus, whenever a deviation beyond the allowed limit is sensed in one of the buildings, the wireless system emits alerts and alarms.
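The alerting rule the abstract describes reduces to a threshold check over the sensed angular deviations. The sketch below is a hypothetical illustration; the building identifiers, the degree unit, and the 0.5° limit are invented for the example.

```python
def check_deviations(readings, limit_deg=0.5):
    """Return the (building, deviation) pairs whose measured angular
    deviation exceeds the allowed limit; in the proposed Internet of Tall
    Buildings each returned pair would trigger a wireless alert."""
    return [(building, dev) for building, dev in readings
            if abs(dev) > limit_deg]
```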