Pub Date: 2022-01-01 | Epub Date: 2022-01-24 | DOI: 10.1007/s11227-022-04307-8
Mohamed Elawady, Amany Sarhan, Mahmoud A M Alshewimy
Mixed reality (MR) is a technology that poses many design and implementation challenges, especially for time-sensitive applications. The main objective of this paper is to introduce a conceptual model for MR applications that adds a new layer of interactivity by using Internet of Things/Internet of Everything models, providing an improved quality of experience for end users. The model incorporates cloud and fog compute layers to support functionalities that require more processing resources and to reduce latency for time-sensitive applications. The proposed model is validated by demonstrating a prototype applied to a real-time case study and by discussing how standard technologies can realize the various components of the model. Moreover, the study shows the applicability of the model, the ease of defining roles, and the coherence of the data and processes found in the most common applications.
"Toward a mixed reality domain model for time-sensitive applications using IoE infrastructure and edge computing (MRIoEF)." Journal of Supercomputing 78(8): 10656-10689.
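To make the layered idea concrete, here is a minimal sketch (with made-up task names, latency budgets, and capacities) of the kind of placement decision such a cloud/fog model implies: latency-critical MR tasks stay in the fog layer while it has capacity, and heavier, deadline-tolerant work is offloaded to the cloud. This illustrates the general split only, not the paper's actual dispatcher.

```python
# Illustrative only: hypothetical task names, latency budgets, and capacities.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float   # end-to-end latency budget
    cpu_demand: float    # arbitrary compute units

FOG_RTT_MS, CLOUD_RTT_MS = 10.0, 80.0   # assumed network round-trip times
FOG_CAPACITY = 4.0                      # assumed compute budget of the fog node

def place(task: Task, fog_load: float) -> str:
    """Keep a task in the fog when its deadline rules out the cloud round-trip
    and the fog node still has headroom; otherwise offload it to the cloud."""
    fits_fog = fog_load + task.cpu_demand <= FOG_CAPACITY
    return "fog" if task.deadline_ms <= CLOUD_RTT_MS and fits_fog else "cloud"

fog_load = 0.0
for t in [Task("hand-tracking", 20, 1.0), Task("scene-reconstruction", 500, 3.5)]:
    tier = place(t, fog_load)
    fog_load += t.cpu_demand if tier == "fog" else 0.0
    print(f"{t.name} -> {tier}")
```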
Pub Date: 2022-01-01 | Epub Date: 2021-11-05 | DOI: 10.1007/s11227-021-04166-9
Ameni Kallel, Molka Rekik, Mahdi Khemakhem
The COronaVIrus Disease 2019 (COVID-19) pandemic is, unfortunately, highly transmissible among people. To detect and track suspected COVID-19 cases and consequently limit the spread of the pandemic, this paper presents a framework that integrates machine learning (ML), cloud, fog, and Internet of Things (IoT) technologies into a novel smart COVID-19 disease monitoring and prognosis system. The proposal leverages IoT devices that collect streaming data from both medical devices (e.g., X-ray machines and lung ultrasound machines) and non-medical devices (e.g., bracelets and smartwatches). Moreover, the proposed hybrid fog-cloud framework provides two kinds of federated ML as a service (federated MLaaS): (i) distributed batch MLaaS, implemented in the cloud for long-term decision-making, and (ii) distributed stream MLaaS, deployed in a hybrid fog-cloud environment for short-term decision-making. The stream MLaaS uses a shared federated prediction model stored in the cloud, whereas real-time symptom data processing and COVID-19 prediction are performed in the fog. The federated ML models are selected after evaluating a set of both batch and stream ML algorithms from Python libraries. The evaluation considers both quantitative metrics (performance in terms of accuracy, precision, root mean squared error, and F1 score) and qualitative metrics (quality of service in terms of server latency, response time, and network latency). It shows that stream ML algorithms have the potential to be integrated into COVID-19 prognosis, enabling early prediction of suspected COVID-19 cases.
"Hybrid-based framework for COVID-19 prediction via federated machine learning models." Journal of Supercomputing 78(5): 7078-7105.
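The abstract says the stream MLaaS relies on a shared federated prediction model kept in the cloud but does not state how local updates are combined. A sample-weighted average of local model weights (FedAvg-style) is one common choice; the sketch below assumes that rule and uses made-up weight vectors and sample counts.

```python
# Assumption: FedAvg-style aggregation; weights and sample counts are made up.
import numpy as np

def aggregate(updates):
    """updates: list of (weight_vector, n_local_samples); returns the
    sample-weighted average that would be stored as the shared cloud model."""
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

fog_site_a = (np.array([0.20, 1.50, -0.30]), 800)  # hypothetical local models
fog_site_b = (np.array([0.40, 1.10, -0.10]), 200)
print(aggregate([fog_site_a, fog_site_b]))         # -> [ 0.24  1.42 -0.26]
```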
Pub Date: 2022-01-01 | DOI: 10.1007/s11227-022-04650-w
Sedighe Abasabadi, Hossein Nematzadeh, Homayun Motameni, Ebrahim Akbari
One of the major problems with microarray datasets is the large number of features, which gives rise to the "curse of dimensionality" when machine learning is applied to them. Feature selection is the process of finding an optimal feature set by removing irrelevant and redundant features, and it plays a significant role in pattern recognition, classification, and machine learning. In this study, a new and efficient hybrid feature selection method, called Garank&rand, is presented. The method combines a wrapper feature selection algorithm based on the genetic algorithm (GA) with a proposed filter feature selection method, SLI-γ. In Garank&rand, some initial solutions are built around the most relevant features according to SLI-γ, while the remaining solutions contain only random features. Eleven high-dimensional and standard datasets were used to evaluate the accuracy of the proposed SLI-γ. Additionally, four well-known high-dimensional microarray datasets were used in an extensive experimental study of the performance of Garank&rand. This analysis showed the robustness of the method and its ability to obtain highly accurate solutions in the early stages of the GA evolutionary process. Finally, the performance of Garank&rand was also compared to the results of GA to highlight its competitiveness and its ability to reduce both the original feature set size and the execution time.
"Hybrid feature selection based on SLI and genetic algorithm for microarray datasets." Journal of Supercomputing 78(18): 19725-19753.
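A minimal sketch of the hybrid seeding idea described above, assuming a simple set-based chromosome: part of each initial individual is drawn from the features ranked highest by a filter score (a stand-in for SLI-γ, whose formula is not reproduced here) and the rest is filled with random features. The split ratio and sizes are illustrative, not the paper's settings.

```python
import random

def seed_population(scores, pop_size, subset_size, top_k, seeded_fraction=0.5):
    """scores: one filter score per feature index (stand-in for SLI-gamma).
    Returns pop_size feature subsets mixing top-ranked and random features."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    top = ranked[:top_k]
    population = []
    for _ in range(pop_size):
        n_seeded = min(int(subset_size * seeded_fraction), len(top))
        chosen = set(random.sample(top, k=n_seeded))
        while len(chosen) < subset_size:            # pad with random features
            chosen.add(random.randrange(len(scores)))
        population.append(sorted(chosen))
    return population

scores = [random.random() for _ in range(2000)]     # made-up filter scores
print(seed_population(scores, pop_size=5, subset_size=10, top_k=50)[0])
```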
Pub Date: 2022-01-01 | Epub Date: 2021-10-04 | DOI: 10.1007/s11227-021-04100-z
Alireza Salehan, Arash Deldari
This research introduces a new probabilistic and meta-heuristic optimization approach inspired by the coronavirus pandemic. Corona is an infection that originates from an unknown animal virus; it occurs in three known types, and COVID-19 has been spreading rapidly since late 2019. According to the SIR model, the virus can easily be transmitted from one person to several others, causing an epidemic over time. Considering the characteristics and behavior of this virus, this paper presents an optimization algorithm called Corona virus optimization (CVO), which is feasible, effective, and applicable. The performance of the algorithm is evaluated on a set of benchmark functions for discrete and continuous problems by comparing its results with those of other well-known optimization algorithms. The CVO algorithm aims to find suitable solutions to application problems by solving several continuous mathematical functions as well as three continuous and discrete applications. Experimental results indicate that the proposed optimization method achieves credible, reasonable, and acceptable performance.
"Corona virus optimization (CVO): a novel optimization algorithm inspired from the Corona virus pandemic." Journal of Supercomputing 78(4): 5712-5743.
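For reference, the SIR dynamics from which the algorithm borrows its metaphor can be simulated in a few lines; the transmission and recovery rates below are illustrative values, not parameters from the paper.

```python
def sir(beta=0.3, gamma=0.1, s=0.99, i=0.01, r=0.0, days=160):
    """Forward-Euler integration of dS/dt=-beta*S*I, dI/dt=beta*S*I-gamma*I,
    dR/dt=gamma*I on population fractions; returns the (S, I, R) trajectory."""
    out = []
    for _ in range(days):
        new_inf, new_rec = beta * s * i, gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        out.append((s, i, r))
    return out

traj = sir()
peak_day = max(range(len(traj)), key=lambda d: traj[d][1])
print(f"infections peak around day {peak_day} at {traj[peak_day][1]:.1%}")
```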
Pub Date: 2022-01-01 | Epub Date: 2021-08-17 | DOI: 10.1007/s11227-021-04005-x
Jimmy Ming-Tai Wu, Min Wei, Mu-En Wu, Shahab Tayeb
The top-k dominating (TKD) query is one method of finding interesting objects: it returns the k objects that dominate the most other objects in a given dataset. Incomplete datasets have missing values in uncertain dimensions, so it is difficult to obtain useful information from them with traditional data mining methods designed for complete data. The BitMap Index Guided Algorithm (BIG) is a good choice for solving this problem. However, finding top-k dominating objects on incomplete big data is even harder: when the dataset is very large, the demands on the feasibility and performance of the algorithm become very high. In this paper, we propose the Efficient Hadoop BitMap Index Guided Algorithm (EHBIG), which applies MapReduce to the whole process together with a pruning strategy. The algorithm realizes TKD queries on incomplete datasets through a bitmap index and uses the MapReduce architecture to make TKD queries possible on large datasets. The pruning strategy greatly reduces runtime and memory usage. Moreover, we also propose an improved version of EHBIG (denoted IEHBIG) that optimizes the overall algorithm flow. Our experimental results clearly show that the proposed algorithm performs well on TKD queries over incomplete large datasets and achieves strong performance on a Hadoop computing cluster.
"Top-k dominating queries on incomplete large dataset." Journal of Supercomputing 78(3): 3976-3997.
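To make the query semantics concrete, here is a naive, single-machine baseline for TKD on incomplete data (None marks a missing value). It assumes dominance is judged only on dimensions observed in both objects, with larger values preferred; EHBIG's bitmap index, pruning, and MapReduce machinery are deliberately not shown.

```python
def dominates(a, b):
    """a dominates b if, over the dimensions present in both, a is >= everywhere
    and strictly greater at least once (larger-is-better, an assumption here)."""
    shared = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    return bool(shared) and all(x >= y for x, y in shared) \
                        and any(x > y for x, y in shared)

def top_k_dominating(data, k):
    scored = []
    for i, a in enumerate(data):
        score = sum(dominates(a, b) for j, b in enumerate(data) if j != i)
        scored.append((score, a))
    return sorted(scored, key=lambda t: t[0], reverse=True)[:k]

objects = [(5, None, 3), (4, 2, 1), (5, 2, None), (1, 1, 1)]
print(top_k_dominating(objects, k=2))   # objects ranked by dominance count
```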
Pub Date: 2022-01-01 | Epub Date: 2021-11-05 | DOI: 10.1007/s11227-021-04169-6
Vimala Balakrishnan, Zhongliang Shi, Chuan Liang Law, Regine Lim, Lee Leng Teh, Yue Fan
We present a benchmark comparison of several deep learning models, including Convolutional Neural Networks, Recurrent Neural Networks, and Bi-directional Long Short-Term Memory, assessed with various word embedding approaches, including Bi-directional Encoder Representations from Transformers (BERT) and its variants, FastText, and Word2Vec. Data augmentation was performed using the Easy Data Augmentation approach, resulting in two datasets (original versus augmented). All models were assessed in two setups, namely 5-class versus 3-class (i.e., a compressed version). Findings show that the best prediction models were neural-network-based models using Word2Vec, with CNN-RNN-Bi-LSTM producing the highest accuracy (96%) and F-score (91.1%). Individually, RNN was the best model, with an accuracy of 87.5% and an F-score of 83.5%, while RoBERTa had the best F-score of 73.1%. The study shows that deep learning is better at analyzing the sentiment in text than supervised machine learning and provides a direction for future work and research.
"A deep learning approach in predicting products' sentiment ratings: a comparative analysis." Journal of Supercomputing 78(5): 7206-7226.
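A small sketch of the two evaluation setups mentioned above: compressing 5-point ratings into a 3-class problem and scoring predictions with accuracy and macro F-score. The rating-to-class mapping and the toy labels are assumptions; the abstract does not spell them out.

```python
from sklearn.metrics import accuracy_score, f1_score

def compress(rating):   # assumed mapping: 1-2 negative, 3 neutral, 4-5 positive
    return "neg" if rating <= 2 else "neu" if rating == 3 else "pos"

y_true_5 = [1, 2, 3, 4, 5, 5, 4, 2]      # made-up gold ratings
y_pred_5 = [1, 3, 3, 4, 5, 4, 4, 1]      # made-up model predictions

y_true_3 = [compress(r) for r in y_true_5]
y_pred_3 = [compress(r) for r in y_pred_5]

print("3-class accuracy :", accuracy_score(y_true_3, y_pred_3))
print("3-class macro F1 :", f1_score(y_true_3, y_pred_3, average="macro"))
```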
Pub Date: 2022-01-01 | Epub Date: 2022-01-12 | DOI: 10.1007/s11227-021-04160-1
Qiang Liu, Qun Cong
This study addresses the issues of nonlinearity, nonholonomic (non-integrity) constraints, and under-actuation in mobile robots. A wheeled robot is selected as the research object, and a kinematic and dynamic control model based on the Internet of Things (IoT) and neural networks is proposed. With the help of IoT sensors, the proposed model can control the mobile robot effectively while ensuring safety, using a model tracking scheme and a radial basis function adaptive control algorithm. The results show that, with a strategy based on model predictive control, the robot can be controlled effectively under its speed and acceleration constraints, achieving smooth movement while maintaining safety. The self-adapting algorithm based on the IoT and neural networks handles parameter uncertainty and wheel skidding notably well. The proposed model converges quickly, in about 2 s, effectively improves the trajectory-tracking performance and robustness of the wheeled mobile robot, and can address the difficulties wheeled mobile robots face in practical applications, offering a reliable reference for algorithm research in this field.
"Kinematic and dynamic control model of wheeled mobile robot under internet of things and neural network." Journal of Supercomputing 78(6): 8678-8707.
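The core building block named in the abstract, a radial basis function (RBF) network adapted online, can be sketched as follows: a Gaussian RBF layer approximates an unknown dynamics term, and its weights are updated from the tracking error. The gradient-style update law, gains, and the toy target function are generic illustrations, not the paper's control design.

```python
import numpy as np

centers = np.linspace(-2.0, 2.0, 9)   # RBF centers over the expected state range
width = 0.5
weights = np.zeros_like(centers)

def phi(x):
    """Gaussian basis activations for scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def rbf_out(x):
    return float(weights @ phi(x))

def adapt(x, error, gain=0.1):
    """Generic gradient-style weight update driven by the tracking error."""
    global weights
    weights += gain * error * phi(x)

# Toy online loop: learn to reproduce an "unknown" term f(x) = sin(x).
for t in range(2000):
    x = 2.0 * np.sin(0.05 * t)
    adapt(x, np.sin(x) - rbf_out(x))
print(round(rbf_out(1.0), 3), "vs", round(np.sin(1.0), 3))
```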
Pub Date: 2022-01-01 | Epub Date: 2021-11-10 | DOI: 10.1007/s11227-021-04149-w
C Anand Deva Durai, Arshiya Begum, Jemima Jebaseeli, Asfia Sabahath
COVID-19 has affected every individual physically or psychologically, substantially shaping how people perceive and respond to the pandemic's danger. Given the lack of vaccines or effective medicines to cure the infection, urgent control measures are required to prevent the continued spread of COVID-19. This can be supported by advanced computing, such as artificial intelligence (AI), machine learning (ML), deep learning (DL), cloud computing, and edge computing. To control the exponential spread of the novel virus, it is crucial for countries to apply containment and mitigation interventions. To prevent exponential growth, several control measures have been applied in the Kingdom of Saudi Arabia to mitigate the COVID-19 epidemic. Because the pandemic has been spreading globally for more than a year, ample data are available for researchers to predict and forecast its effects in the near future. This article interprets the effects of COVID-19 using a Susceptible-Infected-Recovered model with a fatal compartment (SIR-F, where F stands for 'Fatal with confirmation'), an age-structured SEIR (Susceptible-Exposed-Infectious-Removed) model, and machine learning, for smart health care and the well-being of the citizens of Saudi Arabia. Additionally, it examines the different control-measure scenarios produced by the modified SEIR model. The simulation results show that the interventions are vital to flattening the virus spread curve, which can delay the peak and decrease the fatality rate.
"COVID-19 pandemic, predictions and control in Saudi Arabia using SIR-F and age-structured SEIR model." Journal of Supercomputing 78(5): 7341-7353.
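As a concrete illustration of the control-measure scenarios discussed above, the toy SEIR integration below compares the peak of the infectious compartment with and without an intervention that halves the transmission rate. The rates and the intervention effect are illustrative; the paper's model is additionally age-structured, and SIR-F adds a confirmed-fatal compartment, neither of which is shown.

```python
def peak_infectious(beta, sigma=1/5.2, gamma=1/10, e0=1e-4, days=400):
    """Forward-Euler SEIR on population fractions; returns the peak of I."""
    s, e, i, r = 1.0 - e0, e0, 0.0, 0.0
    peak = 0.0
    for _ in range(days):
        new_e, new_i, new_r = beta * s * i, sigma * e, gamma * i
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
        peak = max(peak, i)
    return peak

print("no intervention   : peak I =", round(peak_infectious(beta=0.50), 3))
print("with intervention : peak I =", round(peak_infectious(beta=0.25), 3))
```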
Pub Date: 2022-01-01 | Epub Date: 2021-08-02 | DOI: 10.1007/s11227-021-03990-3
Marco Atzori, Wiebke Köpp, Steven W D Chien, Daniele Massaro, Fermín Mallor, Adam Peplinski, Mohamad Rezaei, Niclas Jansson, Stefano Markidis, Ricardo Vinuesa, Erwin Laure, Philipp Schlatter, Tino Weinkauf
In situ visualization on high-performance computing systems allows us to analyze simulation results that would otherwise be impossible to examine, given the size of the simulation data sets and the execution time of offline post-processing. We develop an in situ adaptor for ParaView Catalyst and Nek5000, a massively parallel Fortran and C code for computational fluid dynamics. We perform a strong-scalability test up to 2048 cores on KTH's Beskow Cray XC40 supercomputer and assess the impact of in situ visualization on Nek5000 performance. In our case study, a high-fidelity simulation of turbulent flow, we observe that in situ operations significantly limit the strong scalability of the code, reducing the relative parallel efficiency to only ≈21% on 2048 cores (the relative efficiency of Nek5000 without in situ operations is ≈99%). Through profiling with Arm MAP, we identified a bottleneck in the image composition step (which uses the Radix-kr algorithm), where the majority of the time is spent on MPI communication. We also identified an imbalance in in situ processing time between rank 0 and all other ranks. In our case, better scaling and load balancing in the parallel image composition would considerably improve the performance of Nek5000 with in situ capabilities. In general, this study highlights the technical challenges posed by integrating high-performance simulation codes with data-analysis libraries and using them in complex practical cases, even when efficient algorithms already exist for a given application scenario.
"In situ visualization of large-scale turbulence simulations in Nek5000 with ParaView Catalyst." Journal of Supercomputing 78(3): 3605-3620.
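The strong-scaling figures quoted in the abstract come down to relative speedup and parallel efficiency computed from wall-clock times at increasing core counts. The sketch below shows that calculation; the timings are hypothetical placeholders chosen so that the 2048-core efficiencies land near the values reported in the abstract, not measurements from the paper.

```python
def relative_efficiency(cores, times, base=0):
    """Parallel efficiency of each run relative to the smallest-core-count run:
    (T_base / T_i) * (P_base / P_i)."""
    return [(times[base] / t) * (cores[base] / c) for t, c in zip(times, cores)]

cores       = [256, 512, 1024, 2048]
t_no_insitu = [100.0, 50.5, 25.8, 13.0]   # hypothetical seconds per time step
t_insitu    = [110.0, 60.0, 40.0, 65.0]   # hypothetical, with in situ enabled

for label, times in [("without in situ", t_no_insitu), ("with in situ", t_insitu)]:
    print(f"{label}: efficiency on {cores[-1]} cores = "
          f"{relative_efficiency(cores, times)[-1]:.0%}")
```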