
Journal of Supercomputing — Latest Publications

Toward a mixed reality domain model for time-sensitive applications using IoE infrastructure and edge computing (MRIoEF).
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2022-01-01 | Epub Date: 2022-01-24 | DOI: 10.1007/s11227-022-04307-8
Mohamed Elawady, Amany Sarhan, Mahmoud A M Alshewimy

Mixed reality (MR) is a technology that poses many challenges in the design and implementation phases, especially for time-sensitive applications. The main objective of this paper is to introduce a conceptual model that gives MR applications a new layer of interactivity through Internet of Things/Internet of Everything models, providing an improved quality of experience for end users. The model incorporates cloud and fog computing layers to support functionality that demands more processing resources while reducing latency for time-sensitive applications. The proposed model is validated by demonstrating a prototype applied to a real-time case study and by discussing how standard technologies can realize the various components of the model. The results show the applicability of the model, the ease of defining roles, and the coherence of the data and processes found in the most common applications.

Citations: 1
Hybrid-based framework for COVID-19 prediction via federated machine learning models.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2022-01-01 | Epub Date: 2021-11-05 | DOI: 10.1007/s11227-021-04166-9
Ameni Kallel, Molka Rekik, Mahdi Khemakhem

The COronaVIrus Disease 2019 (COVID-19) pandemic is, unfortunately, highly transmissible among people. In order to detect and track suspected COVID-19 infections and thereby limit the pandemic's spread, this paper presents a framework integrating machine learning (ML), cloud, fog, and Internet of Things (IoT) technologies to propose a novel smart COVID-19 disease monitoring and prognosis system. The proposal leverages IoT devices that collect streaming data from both medical devices (e.g., X-ray machines and lung ultrasound machines) and non-medical devices (e.g., bracelets and smartwatches). The proposed hybrid fog-cloud framework provides two kinds of federated ML as a service (federated MLaaS): (i) distributed batch MLaaS, implemented in the cloud for long-term decision-making, and (ii) distributed stream MLaaS, installed in a hybrid fog-cloud environment for short-term decision-making. The stream MLaaS uses a shared federated prediction model stored in the cloud, whereas real-time symptom data processing and COVID-19 prediction are done in the fog. The federated ML models are determined after evaluating a set of both batch and stream ML algorithms from Python libraries. The evaluation considers both quantitative metrics (performance in terms of accuracy, precision, root mean squared error, and F1 score) and qualitative metrics (quality of service in terms of server latency, response time, and network latency). It shows that stream ML algorithms have the potential to be integrated into COVID-19 prognosis, allowing early prediction of suspected COVID-19 cases.
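The shared federated prediction model described above is built by combining locally trained client models into one global model. A minimal sketch of that idea, using federated-averaging-style weight aggregation (a generic technique illustrating the concept, not the authors' exact implementation):

```python
def fed_avg(client_weights, client_sizes):
    """Combine per-client model weights into one shared model,
    weighting each client by the number of samples it trained on
    (federated averaging)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * size for w, size in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two fog nodes with flat weight vectors; the second saw 3x more data.
shared = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])  # [2.5, 3.5]
```

In the paper's architecture the averaged model would live in the cloud, while the fog nodes apply it to real-time symptom streams.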

Citations: 18
Hybrid feature selection based on SLI and genetic algorithm for microarray datasets.
IF 2.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2022-01-01 | Epub Date: 2022-06-30 | DOI: 10.1007/s11227-022-04650-w
Sedighe Abasabadi, Hossein Nematzadeh, Homayun Motameni, Ebrahim Akbari

One of the major problems in microarray datasets is the large number of features, which causes "the curse of dimensionality" when machine learning is applied to these datasets. Feature selection is the process of finding an optimal feature set by removing irrelevant and redundant features, and it plays a significant role in pattern recognition, classification, and machine learning. In this study, a new and efficient hybrid feature selection method, called Garank&rand, is presented. The method combines a wrapper feature selection algorithm based on the genetic algorithm (GA) with a proposed filter feature selection method, SLI-γ. In Garank&rand, some initial solutions are built around the most relevant features according to SLI-γ, while the remaining ones contain only random features. Eleven high-dimensional and standard datasets were used to evaluate the accuracy of the proposed SLI-γ. Additionally, four well-known high-dimensional microarray datasets were used in an extensive experimental study evaluating the performance of Garank&rand. This analysis showed the robustness of the method as well as its ability to obtain highly accurate solutions at the early stages of the GA evolutionary process. Finally, the performance of Garank&rand was compared to that of plain GA to highlight its competitiveness and its ability to reduce the original feature set size and the execution time.
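The Garank&rand initialization can be pictured as seeding part of each chromosome from the filter ranking and filling the rest at random. A hedged sketch of that seeding step (the SLI-γ scores are assumed to be precomputed; `ranking` is a hypothetical ordering of feature indices by score):

```python
import random

def seed_population(ranking, n_features, pop_size, n_ranked):
    """Build a GA initial population of feature subsets (bit lists).
    The top `n_ranked` features from the filter ranking are always
    switched on, mirroring the Garank&rand seeding idea; every other
    gene is set at random."""
    top = set(ranking[:n_ranked])
    population = []
    for _ in range(pop_size):
        chromosome = [1 if f in top else random.randint(0, 1)
                      for f in range(n_features)]
        population.append(chromosome)
    return population

random.seed(0)
# Hypothetical filter ranking: feature 4 scored highest, then feature 1, ...
pop = seed_population(ranking=[4, 1, 0, 3, 2], n_features=5,
                      pop_size=10, n_ranked=2)
```

Every chromosome is guaranteed to include the strongest filter-ranked features, which is what lets the GA reach accurate subsets early in the evolutionary process.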

Citations: 0
Corona virus optimization (CVO): a novel optimization algorithm inspired from the Corona virus pandemic.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2022-01-01 | Epub Date: 2021-10-04 | DOI: 10.1007/s11227-021-04100-z
Alireza Salehan, Arash Deldari

This research introduces a new probabilistic and meta-heuristic optimization approach inspired by the Corona virus pandemic. Corona is an infection that originates from an unknown animal virus; three types are known, and COVID-19 has been spreading rapidly since late 2019. As described by the SIR model, the virus easily transmits from one person to several, causing an epidemic over time. Drawing on the characteristics and behavior of this virus, the paper presents a feasible, effective, and applicable optimization algorithm called Corona virus optimization (CVO). A set of benchmark functions evaluates the performance of this algorithm on discrete and continuous problems by comparing its results with those of other well-known optimization algorithms. The CVO algorithm aims to find suitable solutions to application problems by solving several continuous mathematical functions as well as three continuous and discrete applications. Experimental results indicate that the proposed optimization method has a credible, reasonable, and acceptable performance.
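The SIR dynamic that inspires CVO can be stated in a few lines. This discrete-time simulation is illustrative only (it is the epidemic model the abstract cites, not the CVO optimizer itself, and the parameter values are invented for the example):

```python
def simulate_sir(s0, i0, r0, beta, gamma, days):
    """Discrete-time SIR epidemic model: each day a fraction of the
    susceptibles becomes infected (beta * S * I / N) and a fraction
    of the infected recovers (gamma * I)."""
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r
    history = [(s, i, r)]
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return history

hist = simulate_sir(s0=990, i0=10, r0=0, beta=0.3, gamma=0.1, days=100)
peak_day = max(range(len(hist)), key=lambda d: hist[d][1])
```

The "one person infects several" behavior shows up as the infected count rising to a peak and then falling as the susceptible pool is exhausted; CVO maps this spreading process onto the exploration of a search space.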

Citations: 4
Supercomputing: 8th Russian Supercomputing Days, RuSCDays 2022, Moscow, Russia, September 26–27, 2022, Revised Selected Papers
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2022-01-01 | DOI: 10.1007/978-3-031-22941-1
Citations: 2
Top-k dominating queries on incomplete large dataset.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2022-01-01 | Epub Date: 2021-08-17 | DOI: 10.1007/s11227-021-04005-x
Jimmy Ming-Tai Wu, Min Wei, Mu-En Wu, Shahab Tayeb

A top-k dominating (TKD) query finds interesting objects by returning the k objects that dominate the most other objects in a given dataset. Incomplete datasets have missing values in uncertain dimensions, so traditional data mining methods designed for complete data struggle to extract useful information from them. The BitMap Index Guided Algorithm (BIG) is a good choice for solving this problem; however, finding top-k dominating objects in incomplete big data is even harder, and when the dataset is very large, the feasibility and performance requirements on the algorithm become severe. In this paper, we propose an algorithm that applies MapReduce to the whole process together with a pruning strategy, called the Efficient Hadoop BitMap Index Guided Algorithm (EHBIG). The algorithm realizes TKD queries on incomplete datasets through a bitmap index and uses the MapReduce architecture to make TKD queries possible on large datasets. The pruning strategy greatly reduces runtime and memory usage. Moreover, we propose an improved version of EHBIG (denoted IEHBIG) that optimizes the whole algorithm flow. Experimental results show that the proposed algorithm performs well on TKD queries over incomplete large datasets and delivers strong performance on a Hadoop computing cluster.
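For intuition, on complete data the TKD semantics reduce to counting how many objects each object dominates. EHBIG's bitmap index, pruning, and MapReduce layers accelerate exactly this computation (and additionally handle missing values, which this naive brute-force sketch does not):

```python
def dominates(a, b):
    """a dominates b when a is at least as good in every dimension and
    strictly better in at least one (larger values preferred here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def top_k_dominating(objects, k):
    """Naive TKD query: score each object by the number of others it
    dominates and return the k highest-scoring objects."""
    scores = [(sum(dominates(o, other) for other in objects if other is not o), o)
              for o in objects]
    scores.sort(key=lambda t: -t[0])
    return scores[:k]

data = [(5, 5), (3, 4), (4, 2), (1, 1)]
result = top_k_dominating(data, k=2)  # (5, 5) dominates all three others
```

This quadratic scan is exactly what becomes infeasible at scale, motivating the paper's bitmap-index pruning and its distribution of the scoring work across a Hadoop cluster.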

Citations: 5
A deep learning approach in predicting products' sentiment ratings: a comparative analysis.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2022-01-01 | Epub Date: 2021-11-05 | DOI: 10.1007/s11227-021-04169-6
Vimala Balakrishnan, Zhongliang Shi, Chuan Liang Law, Regine Lim, Lee Leng Teh, Yue Fan

We present a benchmark comparison of several deep learning models, including Convolutional Neural Networks, Recurrent Neural Networks, and Bi-directional Long Short-Term Memory, assessed with various word embedding approaches, including Bidirectional Encoder Representations from Transformers (BERT) and its variants, FastText, and Word2Vec. Data augmentation was administered using the Easy Data Augmentation approach, resulting in two datasets (original versus augmented). All models were assessed in two setups: 5-class versus 3-class (i.e., the compressed version). Findings show the best prediction models were neural networks using Word2Vec, with CNN-RNN-Bi-LSTM producing the highest accuracy (96%) and F-score (91.1%). Individually, RNN was the best model, with an accuracy of 87.5% and an F-score of 83.5%, while RoBERTa had the best F-score, at 73.1%. The study shows that deep learning is better suited than supervised machine learning for analyzing the sentiment within text, and it provides a direction for future work and research.
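The 3-class "compressed" setup can be derived from 5-class star ratings with a simple mapping. The thresholds below (1–2 negative, 3 neutral, 4–5 positive) are the conventional choice and are an assumption here, since the abstract does not spell them out:

```python
def compress_rating(stars):
    """Collapse a 5-class star rating into 3 sentiment classes,
    assuming the usual convention: 1-2 negative, 3 neutral,
    4-5 positive."""
    if stars <= 2:
        return "negative"
    if stars == 3:
        return "neutral"
    return "positive"

labels = [compress_rating(s) for s in [1, 2, 3, 4, 5]]
# ['negative', 'negative', 'neutral', 'positive', 'positive']
```

Merging adjacent star levels removes the hardest-to-separate boundaries (e.g., 4 vs. 5 stars), which is why compressed setups typically score higher than their 5-class counterparts.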

Citations: 22
Kinematic and dynamic control model of wheeled mobile robot under internet of things and neural network.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2022-01-01 | Epub Date: 2022-01-12 | DOI: 10.1007/s11227-021-04160-1
Qiang Liu, Qun Cong

This study addresses nonlinearity, nonholonomic constraints, and under-actuation in mobile robots. A wheeled robot is selected as the research object, and a kinematic and dynamic control model based on the Internet of Things (IoT) and a neural network is proposed. With the help of IoT sensors, the proposed model achieves effective control of the mobile robot, while ensuring safety, using a model tracking scheme and a radial basis function adaptive control algorithm. The results show that the robot can be effectively controlled under speed and acceleration constraints using the strategy based on model predictive control, realizing smooth movement under the premise of safety. The self-adapting algorithm based on the IoT and neural network shows notable advantages in handling parameter uncertainty and wheel skidding. The proposed algorithm converges quickly, in about 2 s, effectively improving the wheeled mobile robot's trajectory tracking performance and robustness; it can address the difficulties wheeled mobile robots face in practical applications and provides a reliable reference for algorithm research in this field.
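The radial basis function layer at the heart of such adaptive controllers is easy to state: each hidden unit fires according to the input's distance from its centre, and the controller adapts the output weights on these activations online to cancel model uncertainty. A sketch of the basis evaluation only (centres and width are illustrative values, not the paper's):

```python
import math

def rbf_activations(x, centers, width):
    """Gaussian radial basis functions: each hidden unit responds
    according to how close the input vector is to its centre."""
    return [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * width ** 2))
        for c in centers
    ]

# An input sitting on the first centre activates it fully;
# the second centre, sqrt(2) away, responds with exp(-1).
acts = rbf_activations(x=[0.0, 0.0], centers=[[0.0, 0.0], [1.0, 1.0]], width=1.0)
```

In the adaptive-control setting, the controller's output is a weighted sum of these activations, and only the weights are updated online, which keeps the adaptation law linear in the unknown parameters.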

Citations: 8
COVID-19 pandemic, predictions and control in Saudi Arabia using SIR-F and age-structured SEIR model.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2022-01-01 | Epub Date: 2021-11-10 | DOI: 10.1007/s11227-021-04149-w
C Anand Deva Durai, Arshiya Begum, Jemima Jebaseeli, Asfia Sabahath

COVID-19 has affected every individual physically or psychologically, substantially shaping how people perceive and respond to the pandemic's danger. In the absence of vaccines or effective medicines to cure the infection, urgent control measures are required to prevent the continued spread of COVID-19. This can be supported by advanced computing, such as artificial intelligence (AI), machine learning (ML), deep learning (DL), cloud computing, and edge computing. To control the exponential spread of the novel virus, it is crucial for countries to contain it and mitigate its impact through interventions. To prevent exponential growth, several control measures have been applied in the Kingdom of Saudi Arabia to mitigate the COVID-19 epidemic. As the pandemic has been spreading globally for more than a year, ample data are available for researchers to predict and forecast its effect in the near future. This article interprets the effects of COVID-19 using the Susceptible-Infected-Recovered model with a fatality compartment (SIR-F, where F stands for 'Fatal with confirmation'), an age-structured SEIR (Susceptible-Exposed-Infectious-Removed) model, and machine learning, for smart health care and the well-being of the citizens of Saudi Arabia. Additionally, it examines the different control-measure scenarios produced by the modified SEIR model. The simulation results show that interventions are vital to flatten the virus spread curve, which can delay the peak and decrease the fatality rate.
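The control-measure scenarios examined with the modified SEIR model amount to re-running the dynamics with a reduced contact rate. A minimal, non-age-structured sketch of that comparison (parameter values are illustrative, not the paper's fitted ones):

```python
def simulate_seir(s0, e0, i0, r0, beta, sigma, gamma, days):
    """Discrete-time SEIR model: susceptibles become exposed at rate
    beta*S*I/N, the exposed become infectious at rate sigma, and the
    infectious are removed at rate gamma."""
    s, e, i, r = float(s0), float(e0), float(i0), float(r0)
    n = s + e + i + r
    traj = [(s, e, i, r)]
    for _ in range(days):
        new_exp = beta * s * i / n
        new_inf = sigma * e
        new_rec = gamma * i
        s -= new_exp
        e += new_exp - new_inf
        i += new_inf - new_rec
        r += new_rec
        traj.append((s, e, i, r))
    return traj

def peak_infectious(traj):
    return max(state[2] for state in traj)

# An intervention that halves the contact rate flattens the curve.
free = simulate_seir(990, 0, 10, 0, beta=0.5, sigma=0.2, gamma=0.1, days=300)
curbed = simulate_seir(990, 0, 10, 0, beta=0.25, sigma=0.2, gamma=0.1, days=300)
```

Comparing `peak_infectious(free)` with `peak_infectious(curbed)` reproduces, in miniature, the paper's conclusion that interventions delay and lower the epidemic peak.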

Journal of Supercomputing 78(5): 7341-7353 (2022). Citations: 4
In situ visualization of large-scale turbulence simulations in Nek5000 with ParaView Catalyst.
IF 3.3, Zone 3 Computer Science (CAS), Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2022-01-01; Epub Date: 2021-08-02. DOI: 10.1007/s11227-021-03990-3
Marco Atzori, Wiebke Köpp, Steven W D Chien, Daniele Massaro, Fermín Mallor, Adam Peplinski, Mohamad Rezaei, Niclas Jansson, Stefano Markidis, Ricardo Vinuesa, Erwin Laure, Philipp Schlatter, Tino Weinkauf

In situ visualization on high-performance computing systems allows us to analyze simulation results that would otherwise be impossible to process, given the size of the simulation data sets and the execution time of offline post-processing. We develop an in situ adaptor for ParaView Catalyst and Nek5000, a massively parallel Fortran and C code for computational fluid dynamics. We perform a strong-scalability test up to 2048 cores on KTH's Beskow Cray XC40 supercomputer and assess the impact of in situ visualization on Nek5000 performance. In our case study, a high-fidelity simulation of turbulent flow, we observe that in situ operations significantly limit the strong scalability of the code, reducing the relative parallel efficiency to only ≈21% on 2048 cores (the relative efficiency of Nek5000 without in situ operations is ≈99%). Through profiling with Arm MAP, we identified a bottleneck in the image-composition step (which uses the Radix-kr algorithm), where the majority of the time is spent on MPI communication. We also identified an imbalance in in situ processing time between rank 0 and all other ranks. In our case, better scaling and load balancing in the parallel image composition would considerably improve the performance of Nek5000 with in situ capabilities. In general, the results of this study highlight the technical challenges posed by integrating high-performance simulation codes with data-analysis libraries and using them in complex cases, even when efficient algorithms already exist for a given application scenario.
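The strong-scaling figures quoted above follow from the standard definition of relative parallel efficiency: the speedup relative to a reference run, divided by the ratio of core counts. A minimal sketch of that calculation is below; the timing values in the usage comments are hypothetical, not the paper's measurements.

```python
def relative_efficiency(t_ref, p_ref, t_p, p):
    """Relative parallel efficiency of a run on p cores versus a
    reference run on p_ref cores.

    t_ref: wall-clock time of the reference run on p_ref cores.
    t_p:   wall-clock time of the run under test on p cores.
    Returns 1.0 for ideal strong scaling, lower values otherwise.
    """
    speedup = t_ref / t_p
    return speedup * p_ref / p


# Hypothetical example: going from 256 to 2048 cores (8x) speeds the
# run up by exactly 8x -> efficiency 1.0 (perfect strong scaling).
perfect = relative_efficiency(t_ref=100.0, p_ref=256, t_p=12.5, p=2048)

# Hypothetical poor scaling: only 1.68x faster on 8x the cores
# -> efficiency 0.21, i.e. the ~21% regime reported in the abstract.
poor = relative_efficiency(t_ref=100.0, p_ref=256, t_p=100.0 / 1.68, p=2048)
```

A drop from ≈99% to ≈21% efficiency at the same core count thus means the in situ runs spend the bulk of the added cores' time on non-scaling work, consistent with the MPI-communication bottleneck identified in the image-composition step.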

Journal of Supercomputing 78(3): 3605-3620 (2022). Citations: 4