
Journal of Supercomputing: Latest Publications

A new Apache Spark-based framework for big data streaming forecasting in IoT networks.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-01-01 | DOI: 10.1007/s11227-023-05100-x
Antonio M Fernández-Gómez, David Gutiérrez-Avilés, Alicia Troncoso, Francisco Martínez-Álvarez

Analyzing time-dependent data acquired in a continuous flow is a major challenge for various fields, such as big data and machine learning. Being able to analyze a large volume of data from various sources, such as sensors, networks, and the internet, is essential for improving the efficiency of our society's production processes. Additionally, this vast amount of data is collected dynamically in a continuous stream. The goal of this research is to provide a comprehensive framework for forecasting big data streams from Internet of Things networks and to serve as a guide for designing and deploying other third-party solutions. Hence, a new framework for time series forecasting in a big data streaming scenario, using data collected from Internet of Things networks, is presented. This framework comprises five main modules: Internet of Things network design and deployment, big data streaming architecture, stream data modeling method, big data forecasting method, and a comprehensive real-world application scenario in which a physical Internet of Things network feeds the big data streaming architecture, with linear regression as the algorithm used for illustrative purposes. Comparison with other frameworks reveals that this is the first framework that incorporates and integrates all the aforementioned modules.
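As an illustration of the kind of pipeline the framework targets, the hedged PySpark sketch below scores a socket-based IoT sensor stream with a linear regression model trained offline, using Structured Streaming; the host, port, column layout, and model path are assumptions for the example, not details taken from the paper.

```python
# Hypothetical sketch (not the authors' code): scoring an IoT sensor stream with a
# pre-trained Spark ML linear regression model via Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql.functions import split, col
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegressionModel

spark = SparkSession.builder.appName("iot-stream-forecast").getOrCreate()

# Each line arriving on the socket is assumed to be "sensor_id,lag2,lag1,lag0" (CSV).
raw = (spark.readStream
       .format("socket")
       .option("host", "localhost")
       .option("port", 9999)
       .load())

parts = split(col("value"), ",")
readings = raw.select(
    parts.getItem(0).alias("sensor_id"),
    parts.getItem(1).cast("double").alias("lag2"),
    parts.getItem(2).cast("double").alias("lag1"),
    parts.getItem(3).cast("double").alias("lag0"),
)

# Assemble the lagged readings into a feature vector and score them with a
# linear regression model trained offline (the model path is an assumption).
assembler = VectorAssembler(inputCols=["lag2", "lag1", "lag0"], outputCol="features")
model = LinearRegressionModel.load("/models/iot_lr_model")
forecasts = model.transform(assembler.transform(readings))

query = (forecasts.select("sensor_id", "prediction")
         .writeStream
         .outputMode("append")
         .format("console")
         .start())
query.awaitTermination()
```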

{"title":"A new Apache Spark-based framework for big data streaming forecasting in IoT networks.","authors":"Antonio M Fernández-Gómez,&nbsp;David Gutiérrez-Avilés,&nbsp;Alicia Troncoso,&nbsp;Francisco Martínez-Álvarez","doi":"10.1007/s11227-023-05100-x","DOIUrl":"https://doi.org/10.1007/s11227-023-05100-x","url":null,"abstract":"<p><p>Analyzing time-dependent data acquired in a continuous flow is a major challenge for various fields, such as big data and machine learning. Being able to analyze a large volume of data from various sources, such as sensors, networks, and the internet, is essential for improving the efficiency of our society's production processes. Additionally, this vast amount of data is collected dynamically in a continuous stream. The goal of this research is to provide a comprehensive framework for forecasting big data streams from Internet of Things networks and serve as a guide for designing and deploying other third-party solutions. Hence, a new framework for time series forecasting in a big data streaming scenario, using data collected from Internet of Things networks, is presented. This framework comprises of five main modules: Internet of Things network design and deployment, big data streaming architecture, stream data modeling method, big data forecasting method, and a comprehensive real-world application scenario, consisting of a physical Internet of Things network feeding the big data streaming architecture, being the linear regression the algorithm used for illustrative purposes. Comparison with other frameworks reveals that this is the first framework that incorporates and integrates all the aforementioned modules.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 10","pages":"11078-11100"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9942040/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9502933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Anti-aliasing convolution neural network of finger vein recognition for virtual reality (VR) human-robot equipment of metaverse.
IF 2.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-01-01 | Epub Date: 2022-08-22 | DOI: 10.1007/s11227-022-04680-4
Nghi C Tran, Jian-Hong Wang, Toan H Vu, Tzu-Chiang Tai, Jia-Ching Wang

Metaverse, which is anticipated to be the future of the internet, is a 3D virtual world in which users interact via highly customizable computer avatars. It is considerably promising for several industries, including gaming, education, and business. However, it still has drawbacks, particularly regarding privacy and identity threats. When a person joins the metaverse via virtual reality (VR) human-robot equipment, their avatar, digital assets, and private information may be compromised by cybercriminals. This paper introduces a finger vein recognition approach for the VR human-robot equipment of the metaverse to prevent others from misappropriating it. The finger vein is a biometric feature hidden beneath our skin. It is considerably more secure for person verification than other hand-based biometric characteristics such as fingerprints and palm prints, since it is difficult to imitate. Most conventional finger vein recognition systems that use hand-crafted features are ineffective, especially for images with low quality, low contrast, scale variation, translation, and rotation. Deep learning methods have been demonstrated to be more successful than traditional methods in computer vision. This paper develops a finger vein recognition system based on a convolution neural network and an anti-aliasing technique. We employ a contrast image enhancement algorithm in the preprocessing step to improve the performance of the system. The proposed approach is evaluated on three publicly available finger vein datasets. Experimental results show that our proposed method outperforms the current state-of-the-art methods, achieving 97.66% accuracy on the FVUSM dataset, 99.94% accuracy on the SDUMLA dataset, and 88.19% accuracy on the THUFV2 dataset.
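The hedged Keras sketch below shows one way an anti-aliased downsampling block can be assembled from standard layers (a stride-1 max-pool followed by a blurring average-pool that carries the subsampling), feeding a small classification head; the input size, depth, and number of subjects are assumptions, and this is not the authors' architecture.

```python
# Hypothetical sketch (not the authors' network): a small CNN whose downsampling
# steps blur before subsampling, a crude stand-in for the anti-aliasing layer
# described in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

def antialiased_downsample(x, filters):
    """Convolve, take a dense (stride-1) max, then blur with the stride-2 average pool."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2, strides=1, padding="same")(x)      # dense max
    x = layers.AveragePooling2D(pool_size=3, strides=2, padding="same")(x)  # blur + subsample
    return x

inputs = layers.Input(shape=(64, 128, 1))            # assumed grayscale vein image size
x = antialiased_downsample(inputs, 32)
x = antialiased_downsample(x, 64)
x = antialiased_downsample(x, 128)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(100, activation="softmax")(x)  # assumed number of subjects

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```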

{"title":"Anti-aliasing convolution neural network of finger vein recognition for virtual reality (VR) human-robot equipment of metaverse.","authors":"Nghi C Tran, Jian-Hong Wang, Toan H Vu, Tzu-Chiang Tai, Jia-Ching Wang","doi":"10.1007/s11227-022-04680-4","DOIUrl":"10.1007/s11227-022-04680-4","url":null,"abstract":"<p><p>Metaverse, which is anticipated to be the future of the internet, is a 3D virtual world in which users interact via highly customizable computer avatars. It is considerably promising for several industries, including gaming, education, and business. However, it still has drawbacks, particularly in the privacy and identity threads. When a person joins the metaverse via a virtual reality (VR) human-robot equipment, their avatar, digital assets, and private information may be compromised by cybercriminals. This paper introduces a specific Finger Vein Recognition approach for the virtual reality (VR) human-robot equipment of the metaverse of the Metaverse to prevent others from misappropriating it. Finger vein is a is a biometric feature hidden beneath our skin. It is considerably more secure in person verification than other hand-based biometric characteristics such as finger print and palm print since it is difficult to imitate. Most conventional finger vein recognition systems that use hand-crafted features are ineffective, especially for images with low quality, low contrast, scale variation, translation, and rotation. Deep learning methods have been demonstrated to be more successful than traditional methods in computer vision. This paper develops a finger vein recognition system based on a convolution neural network and anti-aliasing technique. We employ/ utilize a contrast image enhancement algorithm in the preprocessing step to improve performance of the system. The proposed approach is evaluated on three publicly available finger vein datasets. Experimental results show that our proposed method outperforms the current state-of-the-art methods, improvement of 97.66% accuracy on FVUSM dataset, 99.94% accuracy on SDUMLA dataset, and 88.19% accuracy on THUFV2 dataset.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 3","pages":"2767-2782"},"PeriodicalIF":2.5,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9395830/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9093328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hybrid optimization and ontology-based semantic model for efficient text-based information retrieval.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-01-01 | DOI: 10.1007/s11227-022-04708-9
Ram Kumar, S C Sharma

Query expansion is an important approach utilized to improve the efficiency of data retrieval tasks. Numerous works have been carried out by researchers to generate constructive results; however, they do not provide acceptable results for all kinds of queries, particularly phrase and individual queries. The utilization of identical data sources and weighting strategies for expanding such terms is the major cause of this issue, which leaves the model unable to capture the comprehensive relationship between the query terms. In order to tackle this issue, we developed a novel query expansion technique that analyzes different data sources, namely WordNet, Wikipedia, and the Text REtrieval Conference. This paper presents an Improved Aquila Optimization-based COOT (IAOCOOT) algorithm for query expansion which retrieves the semantic aspects that match the query term. The semantic heterogeneity associated with document retrieval mainly impacts the relevance matching between the query and the document. The main cause of this issue is that the similarity among words is not evaluated correctly. To overcome this problem, we use a Modified Needleman-Wunsch algorithm to deal with the problems of uncertainty and imprecision in the information retrieval process, and the semantic ambiguity of indexed terms, from both the local and global perspectives. The k most similar words are determined and returned from a candidate set through a top-k word selection technique, which is widely utilized in different tasks. The proposed IAOCOOT model is evaluated using different standard information retrieval performance metrics to assess the validity of the proposed work by comparing it with other state-of-the-art techniques.
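To make the similarity step concrete, the sketch below implements a plain Needleman-Wunsch global-alignment score and uses it to keep the top-k expansion candidates closest to a query term; the scoring values and candidate list are illustrative assumptions, not the paper's modified variant.

```python
# Hypothetical sketch (not the authors' implementation): Needleman-Wunsch global
# alignment scoring between two terms, used to rank and keep the top-k candidates.
def needleman_wunsch(a: str, b: str, match=1, mismatch=-1, gap=-1) -> int:
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        dp[i][0] = i * gap
    for j in range(cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

def top_k_similar(query_term: str, candidates: list[str], k: int = 5) -> list[str]:
    """Return the k candidates whose alignment score with the query term is highest."""
    ranked = sorted(candidates, key=lambda c: needleman_wunsch(query_term, c), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    expansions = ["retrieval", "retriever", "reptile", "revival", "retrieve"]
    print(top_k_similar("retrieval", expansions, k=3))
```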

{"title":"Hybrid optimization and ontology-based semantic model for efficient text-based information retrieval.","authors":"Ram Kumar,&nbsp;S C Sharma","doi":"10.1007/s11227-022-04708-9","DOIUrl":"https://doi.org/10.1007/s11227-022-04708-9","url":null,"abstract":"<p><p>Query expansion is an important approach utilized to improve the efficiency of data retrieval tasks. Numerous works are carried out by the researchers to generate fair constructive results; however, they do not provide acceptable results for all kinds of queries particularly phrase and individual queries. The utilization of identical data sources and weighting strategies for expanding such terms are the major cause of this issue which leads the model unable to capture the comprehensive relationship between the query terms. In order to tackle this issue, we developed a novel approach for query expansion technique to analyze the different data sources namely WordNet, Wikipedia, and Text REtrieval Conference. This paper presents an Improved Aquila Optimization-based COOT(IAOCOOT) algorithm for query expansion which retrieves the semantic aspects that match the query term. The semantic heterogeneity associated with document retrieval mainly impacts the relevance matching between the query and the document. The main cause of this issue is that the similarity among the words is not evaluated correctly. To overcome this problem, we are using a Modified Needleman Wunsch algorithm algorithm to deal with the problems of uncertainty, imprecision in the information retrieval process, and semantic ambiguity of indexed terms in both the local and global perspectives. The k most similar word is determined and returned from a candidate set through the top-k words selection technique and it is widely utilized in different tasks. The proposed IAOCOOT model is evaluated using different standard Information Retrieval performance metrics to compute the validity of the proposed work by comparing it with other state-of-art techniques.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 2","pages":"2251-2280"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9364863/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10582958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
IoT-fog-based healthcare 4.0 system using blockchain technology.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-01-01 | DOI: 10.1007/s11227-022-04788-7
Israr Ahmad, Saima Abdullah, Adeel Ahmed

Real-time tracking and surveillance of patients' health has become ubiquitous in the healthcare sector as a result of the development of fog, cloud computing, and Internet of Things (IoT) technologies. Medical IoT (MIoT) equipment often transfers health data to a pharmaceutical data center, where it is saved, evaluated, and made available to relevant stakeholders or users. Fog layers have been utilized to increase the scalability and flexibility of IoT-based healthcare services by providing quick response times and low latency. Our proposed solution focuses on an electronic healthcare system that manages both critical and non-critical patients simultaneously. The fog layer is divided into two parts: a critical fog cluster and a non-critical fog cluster. Critical patients are handled at the critical fog cluster for quick response, while non-critical patients are handled using blockchain technology at the non-critical fog cluster, which protects the privacy of patient health records. The suggested solution requires little modification to the current IoT ecosystem while decreasing the response time for critical messages and offloading the cloud infrastructure. Reduced storage requirements for cloud data centers benefit users in addition to saving money on construction and operating expenses. In addition, we examined the proposed work in terms of recall, accuracy, precision, and F-score. The results show that the suggested approach is successful in protecting privacy while retaining standard network settings. Moreover, the suggested system and a benchmark are evaluated in terms of system response time, drop rate, throughput, and fog and cloud utilization. The evaluation results clearly indicate that the performance of the proposed system is better than the benchmark.
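The sketch below illustrates the routing idea under stated assumptions: readings classified as critical trigger an immediate alert, while non-critical readings are appended to a minimal hash-chained ledger standing in for the blockchain layer; the clinical thresholds and field names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch (not the authors' system): critical/non-critical routing with
# a minimal hash-chained ledger for the non-critical path.
import hashlib
import json
import time

class SimpleLedger:
    """Append-only list of blocks, each linked to the previous block's hash."""
    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64, "record": "genesis"}]

    def _hash(self, block) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(self, record: dict) -> None:
        prev = self.chain[-1]
        self.chain.append({"index": prev["index"] + 1,
                           "prev_hash": self._hash(prev),
                           "timestamp": time.time(),
                           "record": record})

def is_critical(reading: dict) -> bool:
    # Assumed thresholds, for illustration only.
    return reading["heart_rate"] > 120 or reading["spo2"] < 90

def route(reading: dict, ledger: SimpleLedger) -> str:
    if is_critical(reading):
        return f"ALERT: critical fog cluster notified for patient {reading['patient_id']}"
    ledger.append(reading)  # non-critical path: privacy-preserving ledger
    return "stored on non-critical fog cluster ledger"

if __name__ == "__main__":
    ledger = SimpleLedger()
    print(route({"patient_id": "P-17", "heart_rate": 135, "spo2": 95}, ledger))
    print(route({"patient_id": "P-20", "heart_rate": 72, "spo2": 98}, ledger))
```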

{"title":"IoT-fog-based healthcare 4.0 system using blockchain technology.","authors":"Israr Ahmad,&nbsp;Saima Abdullah,&nbsp;Adeel Ahmed","doi":"10.1007/s11227-022-04788-7","DOIUrl":"https://doi.org/10.1007/s11227-022-04788-7","url":null,"abstract":"<p><p>Real-time tracking and surveillance of patients' health has become ubiquitous in the healthcare sector as a result of the development of fog, cloud computing, and Internet of Things (IoT) technologies. Medical IoT (MIoT) equipment often transfers health data to a pharmaceutical data center, where it is saved, evaluated, and made available to relevant stakeholders or users. Fog layers have been utilized to increase the scalability and flexibility of IoT-based healthcare services, by providing quick response times and low latency. Our proposed solution focuses on an electronic healthcare system that manages both critical and non-critical patients simultaneously. Fog layer is distributed into two halves: critical fog cluster and non-critical fog cluster. Critical patients are handled at critical fog clusters for quick response, while non-critical patients are handled using blockchain technology at non-critical fog cluster, which protects the privacy of patient health records. The suggested solution requires little modification to the current IoT ecosystem while decrease the response time for critical messages and offloading the cloud infrastructure. Reduced storage requirements for cloud data centers benefit users in addition to saving money on construction and operating expenses. In addition, we examined the proposed work for recall, accuracy, precision, and F-score. The results show that the suggested approach is successful in protecting privacy while retaining standard network settings. Moreover, suggested system and benchmark are evaluated in terms of system response time, drop rate, throughput, fog, and cloud utilization. Evaluated results clearly indicate the performance of proposed system is better than benchmark.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 4","pages":"3999-4020"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9483278/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10631897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Improving drug discovery through parallelism.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-01-01 | DOI: 10.1007/s11227-022-05014-0
Jerónimo S García, Savíns Puertas-Martín, Juana L Redondo, Juan José Moreno, Pilar M Ortigosa

Compound identification in ligand-based virtual screening is limited by two key issues: the quality of the predictions and the time needed to obtain them. With this in mind, we designed OptiPharm, an algorithm that obtained excellent results in improving the sequential methods in the literature. In this work, we go a step further and propose its parallelization. Specifically, we propose a two-layer parallelization: first, an automation of the molecule distribution process between the available nodes in a cluster, and second, a parallelization of the internal methods (initialization, reproduction, selection and optimization). This new software, called pOptiPharm, aims to improve the quality of predictions and reduce experimentation time. As the results show, the performance of the proposed methods is good: pOptiPharm finds better solutions than the sequential OptiPharm, while reducing its computation time almost proportionally to the number of processing units considered.
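A minimal sketch of the outer parallelization layer is shown below, distributing candidate molecules across worker processes with Python's standard library; the per-molecule scoring function is a placeholder for the optimizer's internal loop and is not pOptiPharm code.

```python
# Hypothetical sketch (not pOptiPharm itself): distributing molecules across
# worker processes; each worker would run the optimizer's internal loop
# (initialization, reproduction, selection, optimization).
from concurrent.futures import ProcessPoolExecutor

def score_molecule(molecule_id: int) -> tuple[int, float]:
    """Placeholder for the per-molecule optimization performed by each worker."""
    similarity = 1.0 / (1.0 + molecule_id % 7)   # dummy score for illustration
    return molecule_id, similarity

def screen_library(molecule_ids, n_workers: int = 4):
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(score_molecule, molecule_ids))
    return sorted(results, key=lambda r: r[1], reverse=True)

if __name__ == "__main__":
    ranked = screen_library(range(100), n_workers=4)
    print(ranked[:5])   # best-scoring candidates first
```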

{"title":"Improving drug discovery through parallelism.","authors":"Jerónimo S García,&nbsp;Savíns Puertas-Martín,&nbsp;Juana L Redondo,&nbsp;Juan José Moreno,&nbsp;Pilar M Ortigosa","doi":"10.1007/s11227-022-05014-0","DOIUrl":"https://doi.org/10.1007/s11227-022-05014-0","url":null,"abstract":"<p><p>Compound identification in ligand-based virtual screening is limited by two key issues: the quality and the time needed to obtain predictions. In this sense, we designed OptiPharm, an algorithm that obtained excellent results in improving the sequential methods in the literature. In this work, we go a step further and propose its parallelization. Specifically, we propose a two-layer parallelization. Firstly, an automation of the molecule distribution process between the available nodes in a cluster, and secondly, a parallelization of the internal methods (initialization, reproduction, selection and optimization). This new software, called pOptiPharm, aims to improve the quality of predictions and reduce experimentation time. As the results show, the performance of the proposed methods is good. It can find better solutions than the sequential OptiPharm, all while reducing its computation time almost proportionally to the number of processing units considered.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 9","pages":"9538-9557"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9842220/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9721072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
On using affine sketches for multiple-response dynamic graph regression.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-01-01 | DOI: 10.1007/s11227-022-04865-x
M. H. Chehreghani
{"title":"On using affine sketches for multiple-response dynamic graph regression","authors":"M. H. Chehreghani","doi":"10.1007/s11227-022-04865-x","DOIUrl":"https://doi.org/10.1007/s11227-022-04865-x","url":null,"abstract":"","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"139 1","pages":"5139-5153"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73670183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Prediction model of sparse autoencoder-based bidirectional LSTM for wastewater flow rate.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-01-01 | DOI: 10.1007/s11227-022-04827-3
Jianying Huang, Seunghyeok Yang, Jinhui Li, Jeill Oh, Hoon Kang

Sanitary sewer overflows caused by excessive rainfall-derived infiltration and inflow are the major challenge currently faced by municipal administrations; therefore, the ability to correctly predict the wastewater state of the sanitary sewage system in advance is especially significant. In this paper, we present the design of the Sparse Autoencoder-based Bidirectional long short-term memory (SAE-BLSTM) network model, a model built on Sparse Autoencoder (SAE) and Bidirectional long short-term memory (BLSTM) networks to predict the wastewater flow rate in a sanitary sewer system. This network model consists of a data preprocessing segment, the SAE network segment, and the BLSTM network segment. The SAE performs dimensionality reduction on high-dimensional original input feature data, from which it extracts sparse potential features. The potential features extracted by the SAE hidden layer are concatenated with the smoothed historical wastewater flow rate features to create an augmented feature vector that more accurately predicts the wastewater flow rate. These augmented features are applied to the BLSTM network to predict the future wastewater flow rate. Thus, this network model combines two kinds of abilities: SAE's low-dimensional nonlinear representation of the original input feature data and BLSTM's time series prediction of the wastewater flow rate. We then conducted extensive experiments on the SAE-BLSTM network model utilizing real-world hydrological time series datasets and employing SVM, FCN, GRU, LSTM, and BLSTM models as comparison algorithms. The experimental results show that our proposed SAE-BLSTM model consistently outperforms the comparison models. Specifically, we selected a three-month training period in our dataset to train and test the SAE-BLSTM network model. The SAE-BLSTM network model yielded the lowest RMSE and MAE and the highest R^2, which are 242.55, 179.05, and 0.99626, respectively.
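The hedged Keras sketch below outlines the two building blocks described above: a sparse autoencoder with an L1 activity penalty and a bidirectional LSTM forecaster that consumes the latent code concatenated with a lagged flow-rate window; layer sizes, window length, and the penalty weight are assumptions, not the authors' settings.

```python
# Hypothetical sketch (not the authors' model): sparse autoencoder + BLSTM forecaster.
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

N_FEATURES = 12     # assumed raw hydrological features per time step
LATENT = 4          # assumed sparse-code dimensionality
WINDOW = 24         # assumed look-back window (time steps)

# --- Sparse autoencoder: the L1 activity penalty encourages a sparse latent code ---
ae_in = layers.Input(shape=(N_FEATURES,))
encoded = layers.Dense(LATENT, activation="relu",
                       activity_regularizer=regularizers.l1(1e-5))(ae_in)
decoded = layers.Dense(N_FEATURES, activation="linear")(encoded)
autoencoder = models.Model(ae_in, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
encoder = models.Model(ae_in, encoded)   # reused to produce latent features per step

# --- BLSTM forecaster: latent features plus the historical flow rate per time step ---
seq_in = layers.Input(shape=(WINDOW, LATENT + 1))    # +1 for the lagged flow rate
x = layers.Bidirectional(layers.LSTM(64))(seq_in)
x = layers.Dense(32, activation="relu")(x)
flow_out = layers.Dense(1)(x)                        # next-step flow rate
forecaster = models.Model(seq_in, flow_out)
forecaster.compile(optimizer="adam", loss="mse", metrics=["mae"])

autoencoder.summary()
forecaster.summary()
```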

{"title":"Prediction model of sparse autoencoder-based bidirectional LSTM for wastewater flow rate.","authors":"Jianying Huang,&nbsp;Seunghyeok Yang,&nbsp;Jinhui Li,&nbsp;Jeill Oh,&nbsp;Hoon Kang","doi":"10.1007/s11227-022-04827-3","DOIUrl":"https://doi.org/10.1007/s11227-022-04827-3","url":null,"abstract":"<p><p>Sanitary sewer overflows caused by excessive rainfall derived infiltration and inflow is the major challenge currently faced by municipal administrations, and therefore, the ability to correctly predict the wastewater state of the sanitary sewage system in advance is especially significant. In this paper, we present the design of the Sparse Autoencoder-based Bidirectional long short-term memory (SAE-BLSTM) network model, a model built on Sparse Autoencoder (SAE) and Bidirectional long short-term memory (BLSTM) networks to predict the wastewater flow rate in a sanitary sewer system. This network model consists of a data preprocessing segment, the SAE network segment, and the BLSTM network segment. The SAE is capable of performing data dimensionality reduction on high-dimensional original input feature data from which it can extract sparse potential features from the aforementioned high-dimensional original input feature data. The potential features extracted by the SAE hidden layer are concatenated with the smooth historical wastewater flow rate features to create an augmented previous feature vector that more accurately predicts the wastewater flow rate. These augmented previous features are applied to the BLSTM network to predict the future wastewater flow rate. Thus, this network model combines two kinds of abilities, SAE's low-dimensional nonlinear representation for original input feature data and BLSTM's time series prediction for wastewater flow rate. Then, we conducted extensive experiments on the SAE-BLSTM network model utilizing the real-world hydrological time series datasets and employing advanced SVM, FCN, GRU, LSTM, and BLSTM models as comparison algorithms. The experimental results show that our proposed SAE-BLSTM model consistently outperforms the advanced comparison models. Specifically, we selected a 3 months period training dataset in our dataset to train and test the SAE-BLSTM network model. The SAE-BLSTM network model yielded the lowest RMSE, MAE, and highest <i>R</i> <sup>2</sup>, which are 242.55, 179.05, and 0.99626, respectively.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 4","pages":"4412-4435"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9511464/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10623977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
A comparative analysis of meta-heuristic optimization algorithms for feature selection on ML-based classification of heart-related diseases.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-01-01 | Epub Date: 2023-03-03 | DOI: 10.1007/s11227-023-05132-3
Şevket Ay, Ekin Ekinci, Zeynep Garip

This study aims to use a machine learning (ML)-based enhanced diagnosis and survival model to predict heart disease and survival in heart failure by combining the cuckoo search (CS), flower pollination algorithm (FPA), whale optimization algorithm (WOA), and Harris hawks optimization (HHO) algorithms, which are meta-heuristic feature selection algorithms. To achieve this, experiments are conducted on the Cleveland heart disease dataset and the heart failure dataset collected from the Faisalabad Institute of Cardiology, both published at UCI. The CS, FPA, WOA, and HHO feature selection algorithms are applied for different population sizes and realized based on the best fitness values. For the original heart disease dataset, the maximum prediction F-score of 88% is obtained using K-nearest neighbour (KNN) when compared to logistic regression (LR), support vector machine (SVM), Gaussian Naive Bayes (GNB), and random forest (RF). With the proposed approach, a heart disease prediction F-score of 99.72% is obtained using KNN for a population size of 60 with FPA by selecting eight features. For the original heart failure dataset, the maximum prediction F-score of 70% is obtained using LR and RF compared to SVM, GNB, and KNN. With the proposed approach, a heart failure prediction F-score of 97.45% is obtained using KNN for a population size of 10 with HHO by selecting five features. Experimental findings show that the applied meta-heuristic algorithms combined with ML algorithms significantly improve prediction performance compared to that obtained from the original datasets. The motivation of this paper is to select the most critical and informative feature subset through meta-heuristic algorithms to improve classification accuracy.
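As a simplified illustration of the wrapper-style selection being described, the sketch below scores binary feature masks with a cross-validated KNN F1 fitness; a random search stands in for CS/FPA/WOA/HHO purely for illustration, and the dataset used is an assumption rather than the Cleveland or Faisalabad data.

```python
# Hypothetical sketch (not the paper's algorithms): wrapper feature selection with
# a KNN + F1 fitness; random search replaces the meta-heuristics for brevity.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    """Mean cross-validated F1 of KNN trained on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=5, scoring="f1").mean()

def random_search_selection(X, y, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    best_mask, best_score = None, -1.0
    for _ in range(n_iter):
        mask = rng.integers(0, 2, size=X.shape[1])   # candidate feature subset
        score = fitness(mask, X, y)
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask, best_score

if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)       # assumed stand-in dataset
    mask, score = random_search_selection(X, y)
    print(f"selected {int(mask.sum())} features, CV F1 = {score:.4f}")
```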

{"title":"A comparative analysis of meta-heuristic optimization algorithms for feature selection on ML-based classification of heart-related diseases.","authors":"Şevket Ay,&nbsp;Ekin Ekinci,&nbsp;Zeynep Garip","doi":"10.1007/s11227-023-05132-3","DOIUrl":"10.1007/s11227-023-05132-3","url":null,"abstract":"<p><p>This study aims to use a machine learning (ML)-based enhanced diagnosis and survival model to predict heart disease and survival in heart failure by combining the cuckoo search (CS), flower pollination algorithm (FPA), whale optimization algorithm (WOA), and Harris hawks optimization (HHO) algorithms, which are meta-heuristic feature selection algorithms. To achieve this, experiments are conducted on the Cleveland heart disease dataset and the heart failure dataset collected from the Faisalabad Institute of Cardiology published at UCI. CS, FPA, WOA, and HHO algorithms for feature selection are applied for different population sizes and are realized based on the best fitness values. For the original dataset of heart disease, the maximum prediction F-score of 88% is obtained using K-nearest neighbour (KNN) when compared to logistic regression (LR), support vector machine (SVM), Gaussian Naive Bayes (GNB), and random forest (RF). With the proposed approach, the heart disease prediction F-score of 99.72% is obtained using KNN for population sizes 60 with FPA by selecting eight features. For the original dataset of heart failure, the maximum prediction F-score of 70% is obtained using LR and RF compared to SVM, GNB, and KNN. With the proposed approach, the heart failure prediction F-score of 97.45% is obtained using KNN for population sizes 10 with HHO by selecting five features. Experimental findings show that the applied meta-heuristic algorithms with ML algorithms significantly improve prediction performances compared to performances obtained from the original datasets. The motivation of this paper is to select the most critical and informative feature subset through meta-heuristic algorithms to improve classification accuracy.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 11","pages":"11797-11826"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9983547/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10644968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Application of edge computing combined with deep learning model in the dynamic evolution of network public opinion in emergencies.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-01-01 | DOI: 10.1007/s11227-022-04733-8
Min Chen, Lili Zhang

The aim is to clarify the evolution mechanism of Network Public Opinion (NPO) in public emergencies. This work makes up for the insufficient semantic understanding in NPO-oriented emotion analysis and tries to help maintain social harmony and stability. A combination of Edge Computing (EC) and a Deep Learning (DL) model is applied to the NPO-oriented Emotion Recognition Model (ERM). Firstly, the NPO on public emergencies is introduced. Secondly, three types of NPO emergencies are selected as research cases. An emotional rule system is established based on the One-Class Classification (OCC) model as the emotional standard. The preprocessed Weibo text data are represented using the word embedding method. A Convolutional Neural Network (CNN) is used as the classifier. The NPO-oriented ERM is implemented on the CNN and verified through comparative experiments after the CNN's hyperparameters are adjusted. The research results show that text annotation of the NPO based on OCC emotion rules can obtain better recognition performance. Additionally, the recognition effect of the improved CNN is significantly higher than that of the Support Vector Machine (SVM) in traditional Machine Learning (ML). This work realizes the technological innovation of automatic emotion recognition for NPO groups and provides a basis for relevant government agencies to handle NPO in public emergencies scientifically.
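A minimal sketch of a word-embedding text CNN of the kind described is given below; the vocabulary size, sequence length, filter settings, and three-class output are assumptions, not the authors' configuration.

```python
# Hypothetical sketch (not the authors' network): a word-embedding text CNN for
# emotion classification of short posts.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000     # assumed vocabulary size after preprocessing
MAX_LEN = 120          # assumed maximum post length (tokens)
N_CLASSES = 3          # assumed label set, e.g. positive / neutral / negative

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 128)(inputs)          # word-embedding layer
x = layers.Conv1D(128, 5, activation="relu")(x)        # n-gram style filters
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dense(64, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(N_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```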

{"title":"Application of edge computing combined with deep learning model in the dynamic evolution of network public opinion in emergencies.","authors":"Min Chen,&nbsp;Lili Zhang","doi":"10.1007/s11227-022-04733-8","DOIUrl":"https://doi.org/10.1007/s11227-022-04733-8","url":null,"abstract":"<p><p>The aim is to clarify the evolution mechanism of Network Public Opinion (NPO) in public emergencies. This work makes up for the insufficient semantic understanding in NPO-oriented emotion analysis and tries to maintain social harmony and stability. The combination of the Edge Computing (EC) and Deep Learning (DL) model is applied to the NPO-oriented Emotion Recognition Model (ERM). Firstly, the NPO on public emergencies is introduced. Secondly, three types of NPO emergencies are selected as research cases. An emotional rule system is established based on the One-Class Classification (OCC) model as emotional standards. The word embedding representation method represents the preprocessed Weibo text data. Convolutional Neural Network (CNN) is used as the classifier. The NPO-oriented ERM is implemented on CNN and verified through comparative experiments after the CNN's hyperparameters are adjusted. The research results show that the text annotation of the NPO based on OCC emotion rules can obtain better recognition performance. Additionally, the recognition effect of the improved CNN is significantly higher than the Support Vector Machine (SVM) in traditional Machine Learning (ML). This work realizes the technological innovation of automatic emotion recognition of NPO groups and provides a basis for the relevant government agencies to handle the NPO in public emergencies scientifically.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 2","pages":"1526-1543"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9330939/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10534038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
A secured internet of robotic things (IoRT) for long-term care services in a smart building.
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-01-01 | DOI: 10.1007/s11227-022-04845-1
Shih-Hao Chang, Chih-Hsien Hsia, Wei-Zhi Hong

Long-term care refers to any support, both medical and non-medical, provided to the elderly with a chronic illness or disability due to physical or mental conditions. Since long-term care insurance is not inexpensive, low-cost devices and sensors can be used to create medical assistance systems that reduce human maintenance costs. The requirement of security and privacy under healthcare information protection is a critical issue for internet of medical things (IoMT) data transmission. In this paper, we designed an IoMT security robot for a long-term care system. The goal of this IoMT security robot is to provide secure transmission of residents' private information. It is composed of three layers, namely collection, encryption, and transmission. The function of the IoMT security robot is to first collect data from the patient or the elderly, then provide efficient data encryption, and finally deliver secured data transmission mechanisms to send the valuable data to the cloud. This IoMT security robot also has a server authentication mechanism and supports an IoT and IoMT device inspection function. Our evaluation results showed that even when we utilized a low-power device such as the Raspberry Pi, the AES algorithm encrypted and decrypted payloads of 100 bytes to 100 KB in under 9 ms, which is much better than ECC, which takes about 104 ms. Further, we found that AES takes only 0.00015 s to decrypt 100 bytes of data, far faster than the ECC algorithm, which takes 0.09 s.
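The sketch below shows a rough version of such a timing measurement using AES-CTR from the Python cryptography package; the key size, payload sizes, and cipher mode are assumptions, and the numbers it prints will depend on the hardware rather than reproduce the paper's figures.

```python
# Hypothetical sketch (not the authors' benchmark): timing AES-CTR encryption and
# decryption of payloads from 100 bytes to 100 KB.
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)      # AES-256 key (assumed key size)
nonce = os.urandom(16)    # CTR nonce; reused here only because this is a timing demo

def time_aes(payload: bytes) -> tuple[float, float]:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    t0 = time.perf_counter()
    ciphertext = enc.update(payload) + enc.finalize()
    t_enc = time.perf_counter() - t0

    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    t0 = time.perf_counter()
    plaintext = dec.update(ciphertext) + dec.finalize()
    t_dec = time.perf_counter() - t0

    assert plaintext == payload   # sanity check: round trip must match
    return t_enc, t_dec

if __name__ == "__main__":
    for size in (100, 1_000, 10_000, 100_000):
        t_enc, t_dec = time_aes(os.urandom(size))
        print(f"{size:>7} bytes: encrypt {t_enc * 1e3:.3f} ms, decrypt {t_dec * 1e3:.3f} ms")
```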

{"title":"A secured internet of robotic things (IoRT) for long-term care services in a smart building.","authors":"Shih-Hao Chang,&nbsp;Chih-Hsien Hsia,&nbsp;Wei-Zhi Hong","doi":"10.1007/s11227-022-04845-1","DOIUrl":"https://doi.org/10.1007/s11227-022-04845-1","url":null,"abstract":"<p><p>Long-term care refers to any support, both medical and non-medical, provided to the elderly with a chronic illness or disability due to physical or mental conditions. Since the cost of long-term care insurance is not inexpensive, low-cost devices and sensors can be used to create medical assistance systems to reduce human maintenance costs. The requirement of security and privacy under healthcare information protection is a critical issue for internet of medical things (IoMT) data transmission. In this paper, we designed an IoMT security robot for a long-term care system. The goal of this IoMT security robot is to provide secure transmission of the residents' private information. It is composed of three layers, namely, collection, encryption, and transmission. The function of the IoMT security robot is to first collect data from the patient or the elderly, then provide efficient data encryption, and deliver secured data transmission mechanisms to send the valuable data to the cloud. This IoMT security robot also has a server authentication mechanism, and a support IoT and IoMT devices inspection function. Our evaluation results showed that even when we utilized a low power consumption device like Raspberry Pi, AES algorithm achieved an encrypt and decrypt of 100-100 K bytes under 9 ms, which is a lot better than ECC, which takes about 104 ms. Further, we found that the AES only takes 0.00015 s to decrypt 100 Bytes data, which is way faster than the ECC algorithm, which takes 0.09 s.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 5","pages":"5276-5290"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9559120/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10761235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1