
Latest publications from Multimedia Tools and Applications

An efficient crack detection and leakage monitoring in liquid metal pipelines using a novel BRetN and TCK-LSTM techniques
IF 3.6 · CAS Category 4 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-06 · DOI: 10.1007/s11042-024-20170-6
Praveen Sankarasubramanian

Nowadays, the pipeline system is the safest, most economical, and most efficient means of transporting petroleum products and other chemical fluids. However, faults in pipelines cause resource wastage and environmental pollution. Most existing works have focused either on surface Crack Detection (CD) or Leakage Detection (LD) of pipes, with limited features. Hence, efficient crack detection and leakage monitoring are proposed based on Acoustic Emission (AE) signal and AE image features, using the new Berout Retina Net (BRetN) and Tent Chaotic Kaiming-centric Long Short Term Memory (TCK-LSTM) methodologies. The process begins with the gathering of input data, followed by preprocessing. Cracks are then detected using BRetN, and the features of the AE signals are extracted. In parallel, the AE signal is transformed into an AE image using the Continuous Wavelet Transform (CWT). The AE image features are then extracted and integrated with the AE signal features. Next, the optimal features are chosen using the Gorilla Troops Optimizer (GTO). Finally, the TCK-LSTM model is used to detect the leakage level of the pipeline. The experimental outcomes show that the proposed framework detected crack and leakage levels with 98.14% accuracy, 95.37% precision, and 98.84% specificity when compared against existing techniques.
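The CWT step that turns a 1-D AE signal into a 2-D AE image can be sketched with a minimal, numpy-only Morlet transform. The wavelet family, scale range, and toy signal below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Naive continuous wavelet transform with a Morlet mother wavelet.

    Returns a (len(scales), len(signal)) magnitude map that can be
    rendered as an image (a scalogram)."""
    n = len(signal)
    t = np.arange(n)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Morlet wavelet sampled at this scale, centred in the window
        x = (t - n / 2) / s
        wavelet = np.exp(1j * w0 * x) * np.exp(-x**2 / 2) / np.sqrt(s)
        # correlate signal with the wavelet (convolution with reversed conjugate)
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet)[::-1], mode="same"))
    return out

# A toy "AE signal": a slow oscillation plus noise
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 0.05 * np.arange(512)) + 0.1 * rng.standard_normal(512)
scalogram = morlet_cwt(sig, scales=np.arange(1, 33))
print(scalogram.shape)  # (32, 512) -- ready to be saved/treated as an AE image
```

The resulting 2-D magnitude map is what a CNN-style detector would consume as the "AE image".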

Citations: 0
Deception detection with multi-scale feature and multi-head attention in videos
IF 3.6 · CAS Category 4 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-06 · DOI: 10.1007/s11042-024-20124-y
Shusen Yuan, Guanqun Zhou, Hongbo Xing, Youjun Jiang, Yewen Cao, Mingqiang Yang

Detecting deception in videos has been a challenging task, especially in real-world situations. In this study, we extracted facial action units from micro-expressions and then calculated the frequency and number of occurrences of each action unit. To capture information at different scales, we proposed a combination of a Multi-Scale Feature (MSF) model and Multi-Head Attention (MHA). The MSF model consists of two CNNs with different convolution kernels, with GELU used as the activation function. The MHA model divides the input features into different subspaces and generates attention for each subspace to make the features more effective. We evaluated the proposed method on the Real-life Trial dataset and achieved an accuracy of 87.81%. The results show that the MSF and MHA models increase the accuracy of the deception detection task, and a comparative experiment demonstrates the effectiveness of the proposed method.
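The subspace-splitting idea behind MHA can be illustrated with a bare-bones numpy sketch. Learned projection matrices are omitted for brevity, so this is a structural illustration, not the paper's trained model:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads):
    """Self-attention that splits the feature dimension into `num_heads`
    subspaces, attends within each subspace independently, then
    concatenates the per-head results back together."""
    seq, dim = x.shape
    assert dim % num_heads == 0
    d_head = dim // num_heads
    heads = x.reshape(seq, num_heads, d_head).transpose(1, 0, 2)  # (h, seq, d)
    scores = heads @ heads.transpose(0, 2, 1) / np.sqrt(d_head)   # (h, seq, seq)
    attn = softmax(scores, axis=-1)                               # one map per head
    out = attn @ heads                                            # (h, seq, d)
    return out.transpose(1, 0, 2).reshape(seq, dim)

x = np.random.default_rng(1).standard_normal((10, 64))  # 10 steps, 64 features
y = multi_head_attention(x, num_heads=8)
print(y.shape)  # (10, 64)
```

Each head sees only its own 8-dimensional slice, which is the "different subspaces" the abstract refers to.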

Citations: 0
AdaptiveGPT: Towards Intelligent Adaptive Learning
IF 3.6 · CAS Category 4 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-05 · DOI: 10.1007/s11042-024-20144-8
Andréia dos Santos Sachete, Alba Valéria de Sant’anna de Freitas Loiola, Raquel Salcedo Gomes

Adaptive learning is an educational methodology that personalizes learning according to the student’s pedagogical path. In digital environments, the strategic use of technologies enhances adaptive learning initiatives, enabling a dynamic understanding of intricate contextual nuances and the ability to identify and recommend appropriate learning activities. Therefore, this work proposes developing and evaluating a prototype that uses a large language model to automatically create adaptive educational activities in face-to-face and virtual environments. The applied methodology involves implementing a large language model with advanced cognitive capabilities to generate learning activities that adapt to individual needs. A proof of concept was developed to evaluate the practicality and usability of this approach. The research results indicate that the approach is practical and adaptable to different educational contexts, reinforcing the synergy between adaptive learning, artificial intelligence, and learning environments. The proof-of-concept evaluation showed that the prototype is highly usable, validating the proposal as an innovative solution to the growing needs of modern education.

Citations: 0
Transfer learning for human gait recognition using VGG19: CASIA-A dataset
IF 3.6 · CAS Category 4 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-05 · DOI: 10.1007/s11042-024-20132-y
Veenu Rani, Munish Kumar

Identification of individuals based on physical characteristics has recently gained popularity and falls under the category of pattern recognition. Biometric recognition has emerged as an effective strategy for preventing security breaches, as no two people share the same physical characteristics. "Gait recognition" specifically refers to identifying individuals based on their walking patterns. Human gait is a method of locomotion that relies on the coordination of the brain, nerves, and muscles. Traditionally, human gait analysis was performed subjectively through visual observation. However, with advancements in technology and deep learning, human gait analysis can now be conducted empirically and without the need for subject cooperation, enhancing quality of life. Deep learning methods have demonstrated excellent performance in human gait recognition. In this article, the authors employed the VGG19 transfer learning model for human gait recognition. They used the public benchmark dataset CASIA-A for their experiments, which contains a total of 19,139 images captured from 20 individuals. The dataset was split into two train:test patterns, 70:30 and 80:20. To evaluate the performance of the proposed model, the authors considered three metrics: loss, validation loss (val_loss), and accuracy rate. They reported accuracy rates of 96.9% and 97.8%, with losses of 2.71% and 2.01% for the two patterns, respectively.
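The two reported train:test patterns imply concrete split sizes for the 19,139 CASIA-A images; a minimal sketch (the shuffling and truncation rule here are illustrative, not the authors' exact protocol):

```python
import numpy as np

def split_indices(n, train_frac, seed=0):
    """Shuffle n sample indices and split them into train/test by fraction."""
    idx = np.random.default_rng(seed).permutation(n)
    cut = int(n * train_frac)  # truncate toward zero
    return idx[:cut], idx[cut:]

n_images = 19139  # CASIA-A image count reported in the abstract
for frac in (0.70, 0.80):
    train, test = split_indices(n_images, frac)
    print(f"{int(frac * 100)}:{int(100 - frac * 100)} -> "
          f"{len(train)} train / {len(test)} test")
```

Under this rule, 70:30 yields 13,397 training and 5,742 test images, and 80:20 yields 15,311 and 3,828.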

Citations: 0
A geometric-approach based Combinatorial Transformative Scalogram analysis for multiclass identification of pathologies in a voice signal
IF 3.6 · CAS Category 4 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-05 · DOI: 10.1007/s11042-024-20067-4
Ranita Khumukcham, Kishorjit Nongmeikapam

Many researchers have preferred non-invasive techniques for recognizing the exact type of physiological abnormality in the vocal tract by training machine learning algorithms with feature descriptors extracted from the voice signal. However, until now, most techniques have been limited to classifying whether a voice is normal or abnormal. It is crucial that a trained Artificial Intelligence (AI) system be able to identify the exact pathology associated with a voice for deployment in a realistic environment. Another issue is the need to suppress ambient noise that can mix with the spectra of the voice. The current work proposes a robust, less time-consuming, non-invasive technique for identifying the pathology associated with a laryngeal voice signal. More specifically, a two-stage signal filtering approach, encompassing a score-based geometric method and a glottal inverse filtering method, is applied to the input voice signal. The aim is to estimate the noise spectra, regenerate a clean signal, and finally deliver a purely fundamental glottal flow-derived signal. In the next stage, the clean glottal derivative signals are used to form a novel fused scalogram, referred to here as the "Combinatorial Transformative Scalogram (CTS)." The CTS is a time-frequency domain plot that combines two time-frequency scalograms. The performance of the two individual scalograms, as well as that of the CTS database, is investigated thoroughly. Nine classification metrics are used: sensitivity, mean accuracy, error, precision, false positive rate, specificity, Cohen’s kappa, Matthews Correlation Coefficient, and F1 score. On the VOice ICar fEDerico II (VOICED) standard database, the method achieved the highest mean accuracy of 94.12%, with a sensitivity of 93.85% and a specificity of 97.96%, against other existing techniques. The current method performed well despite the data imbalance that exists between classes.
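The abstract does not spell out the CTS fusion rule; as one plausible reading, two time-frequency maps of the same signal can be normalized and stacked into a combined scalogram. The STFT stand-in, window sizes, and channel-stacking fusion below are all assumptions:

```python
import numpy as np

def stft_mag(sig, win=64, hop=32):
    """Magnitude short-time Fourier map: one cheap time-frequency scalogram."""
    frames = [sig[i:i + win] * np.hanning(win)
              for i in range(0, len(sig) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)

def combine_scalograms(a, b):
    """Fuse two time-frequency maps into one 2-channel 'image' after
    per-map min-max normalisation."""
    norm = lambda m: (m - m.min()) / (m.max() - m.min() + 1e-12)
    return np.stack([norm(a), norm(b)])  # (2, freq, time)

sig = np.sin(2 * np.pi * 0.1 * np.arange(1024))
a = stft_mag(sig, win=64, hop=32)    # finer time resolution
b = stft_mag(sig, win=128, hop=32)   # finer frequency resolution
# crop both maps to a common shape before fusing
f = min(a.shape[0], b.shape[0])
t = min(a.shape[1], b.shape[1])
fused = combine_scalograms(a[:f, :t], b[:f, :t])
print(fused.shape)  # (2, 33, 29)
```

A classifier would then consume `fused` the way the TCK-style pipeline consumes the CTS plot.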

Citations: 0
Insights into research on blockchain for smart contracts: a bibliometric analysis
IF 3.6 · CAS Category 4 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-05 · DOI: 10.1007/s11042-024-20164-4
Renu Singh, Ashlesha Gupta, Poonam Mittal

Over the past few years, blockchain technology has gained significant attention. This surge in popularity can be attributed to the emergence of cryptocurrencies and the development of smart contracts. A cryptocurrency is a digital currency that eliminates the double-spending problem. Cryptocurrencies like Bitcoin, Ethereum, Litecoin, Stellar, Zcash, Maker, and Aave have become popular and are preferred for money transfers. Smart contracts are the next popular technology on the blockchain after cryptocurrency: a smart contract can be considered a piece of code that executes automatically when predefined conditions are fulfilled. Researchers believe that the potential of blockchain with smart contracts is only in its initial stages and that its true potential has yet to be fully discovered. Hence, an extensive bibliometric analysis is conducted to understand blockchain trends for smart contracts and to give future directions in this field. The analysis follows several steps: formulating the research question, defining the scope of the research, extracting and analyzing data, answering the research question, and finally drawing a conclusion. This paper provides scholars and researchers with an extensive statistical and network analysis of the extracted smart contract publications.

Citations: 0
SUGrasping: a semantic grasping framework based on multi-head 3D U-Net
IF 3.6 · CAS Category 4 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-05 · DOI: 10.1007/s11042-024-20037-w
He Cao, Yunzhou Zhang, Zhexue Ge, Xin Chen, Xiaozheng Liu, Jiaqi Zhao

Object grasping is an important skill for robots interacting with the real world, especially in unstructured environments where occlusions and target objects of different shapes are present. In this work, we introduce a robot grasping pipeline called SUGrasping, which obtains grasping poses for target objects more precisely. The pipeline takes both the Truncated Signed Distance Function (TSDF) and the point cloud of the grasping scene as input. The proposed multi-head 3D U-Net accepts the reconstructed TSDF representation and outputs the grasping configurations, including the predicted grasp quality and the orientation and width of the gripper. The point cloud is fed into PointNet to obtain semantic segmentation results for all objects in the grasping workspace. With the help of the point cloud inside the gripper, the relationship between the gripper and the semantic information can be established. This lets the robot know which object it is grasping, rather than just removing objects from the workspace as in previous works. Experimental results show that the proposed method improves the grasping success rate and the percentage of target objects cleared, outperforming the state-of-the-art methods compared in this paper.
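The TSDF input can be illustrated with a toy voxel grid; a minimal sketch of a truncated signed distance field for a sphere (grid size, radius, and truncation band are arbitrary choices, not the paper's reconstruction settings):

```python
import numpy as np

def sphere_tsdf(grid_size=32, radius=10.0, trunc=3.0):
    """Truncated Signed Distance Function of a sphere on a voxel grid:
    negative inside the surface, positive outside, clipped to +/-trunc."""
    c = (grid_size - 1) / 2.0
    z, y, x = np.mgrid[:grid_size, :grid_size, :grid_size]
    dist = np.sqrt((x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2)
    sdf = dist - radius               # signed distance to the sphere surface
    return np.clip(sdf, -trunc, trunc)

tsdf = sphere_tsdf()
print(tsdf.shape, tsdf.min(), tsdf.max())  # (32, 32, 32) -3.0 3.0
```

A volumetric network like the multi-head 3D U-Net would consume such a grid directly as its input tensor.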

Citations: 0
Integrating cycleGAN and BERT for Chinese text style transfer
IF 3.6 · CAS Category 4 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-04 · DOI: 10.1007/s11042-024-20131-z
Chien-Hsing Chou, Cheng-Hou Chou, Yi-Zeng Hsieh, Tzu-Shien Yang

In this study, we integrate the Bidirectional Encoder Representations from Transformers (BERT) model with the Cycle Generative Adversarial Network (CycleGAN) to create a system for Chinese text style transfer. Natural language processing (NLP) involves converting human languages into data interpretable by computers, enabling applications like text classification, chatbots, and dialogue systems. Recent advancements, such as Google's transformer model and the BERT technique, have significantly improved NLP capabilities through self-attention mechanisms and unsupervised pretraining. Text style transfer modifies the style of texts without altering their semantics. Previous methods like StyIns and models based on disentangled representation learning highlight the challenges of retaining text meaning during style transfer. Our system leverages CycleGAN’s unsupervised learning to convert unpaired data between wuxia and fantasy styles while preserving semantics. Using the pretrained BERT model from the Chinese Knowledge and Information Processing (CKIP) Lab, our experimental results demonstrate successful style conversion, maintaining the original meanings of texts. This integration of BERT and CycleGAN shows promise for further advancements in NLP applications.
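CycleGAN's core constraint for unpaired data is the cycle-consistency loss: a sample mapped to the other style and back should recover itself. A toy numpy sketch with placeholder generators (the actual system operates on BERT text representations, not these vectors):

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 reconstruction error of the cycle x -> G(x) -> F(G(x)).
    G and F are stand-ins for the two style-transfer generators
    (e.g. wuxia -> fantasy and fantasy -> wuxia)."""
    return np.mean(np.abs(F(G(x)) - x))

# Placeholder "generators" that happen to be exact inverses
G = lambda v: v * 2.0 + 1.0     # forward mapping (placeholder)
F = lambda v: (v - 1.0) / 2.0   # backward mapping (its inverse)

x = np.random.default_rng(2).standard_normal((4, 8))  # 4 toy sentence embeddings
print(cycle_consistency_loss(x, G, F))  # near zero for perfectly inverse generators
```

During training, this loss is minimized alongside the adversarial losses so that style changes do not destroy the sentence's semantics.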

Citations: 0
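The cycle-consistency idea at the heart of this system can be illustrated without any neural machinery. In the sketch below, a toy token-substitution function stands in for each CycleGAN generator (G: wuxia → fantasy, F: fantasy → wuxia), and the cycle loss measures how much of a sentence fails to survive the round trip. The vocabulary mapping and function names are invented for illustration; the actual system operates on BERT representations of unpaired corpora, not literal token swaps.

```python
# Toy sketch of CycleGAN's cycle-consistency objective for text style
# transfer. The word mappings below are illustrative stand-ins for the
# learned generators; nothing here comes from the paper's implementation.

WUXIA_TO_FANTASY = {"sword": "wand", "qi": "mana", "sect": "guild"}
FANTASY_TO_WUXIA = {v: k for k, v in WUXIA_TO_FANTASY.items()}

def g(tokens):
    """G: wuxia -> fantasy (stand-in for one generator)."""
    return [WUXIA_TO_FANTASY.get(t, t) for t in tokens]

def f(tokens):
    """F: fantasy -> wuxia (stand-in for the reverse generator)."""
    return [FANTASY_TO_WUXIA.get(t, t) for t in tokens]

def cycle_loss(tokens):
    """Fraction of tokens not recovered after the round trip F(G(x))."""
    recon = f(g(tokens))
    return sum(a != b for a, b in zip(tokens, recon)) / len(tokens)

sentence = ["the", "sword", "master", "gathered", "qi"]
assert g(sentence) == ["the", "wand", "master", "gathered", "mana"]
assert cycle_loss(sentence) == 0.0  # perfect reconstruction
```

A cycle loss of zero means F(G(x)) reproduces x exactly, which is the property CycleGAN's training objective pushes toward so that style can change while semantics are preserved.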
Enhancing spoken dialect identification with stacked generalization of deep learning models
IF 3.6 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-04 | DOI: 10.1007/s11042-024-20143-9
Khaled Lounnas, Mohamed Lichouri, Mourad Abbas

As dialects are widely used in many countries, there is growing interest in incorporating them into various applications, including conversational systems. Processing spoken dialects is an important module in such systems, yet it remains a challenging task due to the lack of resources and the inherent ambiguity and complexity of dialects. This paper presents a comparison of two approaches for identifying spoken Maghrebi dialects, tested on an in-house corpus composed of four dialects: Algerian Arabic Dialect (AAD), Algerian Berber Dialect (ABD), Moroccan Arabic Dialect (MAD), and Moroccan Berber Dialect (MBD), as well as two variants of Modern Standard Arabic (MSA): MSA_ALG and MSA_MAR. The first method uses a fully connected neural network (NN2) to retrain several Transfer Learning (TL) models with varying layer numbers, including Residual Networks (ResNet50, ResNet101), Visual Geometry Group networks (VGG16, VGG19), Dense Convolutional Networks (DenseNet121, DenseNet169), and Efficient Convolutional Neural Networks for Mobile Vision Applications (MobileNet, MobileNetV2). These models were chosen based on their proven ability to capture different levels of feature abstraction: deeper models like ResNet and DenseNet capture more complex and nuanced patterns, which is critical for distinguishing subtle differences in dialects, while VGG and MobileNet models offer computational efficiency, making them suitable for applications with limited resources. The second approach employs a “stacked generalization” strategy, which merges predictions from the previously trained models to enhance the final classification performance. Our results show that this cascade strategy improves the overall performance of the Language/Dialect Identification system, with an accuracy increase of up to 5% for specific dialect pairs. Notably, the best performance was achieved with DenseNet and ResNet models, reaching an accuracy of 99.11% for distinguishing between Algerian Berber Dialect and Moroccan Berber Dialect. These findings indicate that despite the limited size of the employed dataset, the cascade strategy and the selection of robust TL models significantly enhance the system’s performance in dialect identification. By leveraging the unique strengths of each model, our approach demonstrates a robust and efficient solution to the challenge of spoken dialect processing.

Citations: 0
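The “stacked generalization” strategy described above reduces to a two-level combiner: level-0 classifiers (stand-ins here for the fine-tuned ResNet/DenseNet/VGG/MobileNet backbones) emit class probabilities, and a level-1 rule merges them into a final prediction. In this minimal sketch the per-model probabilities and the combination weights are fabricated for illustration; the paper's actual level-1 combiner is trained rather than hand-weighted.

```python
# Minimal sketch of stacked generalization for dialect identification.
# The per-model probabilities and weights below are invented; they are
# not taken from the paper's experiments.

DIALECTS = ["ABD", "MBD"]

def stack(base_probs, weights):
    """Weighted average of per-model probability vectors, renormalized."""
    k = len(base_probs[0])
    merged = [sum(w * p[i] for w, p in zip(weights, base_probs)) for i in range(k)]
    total = sum(merged)
    return [m / total for m in merged]

# Hypothetical per-model outputs for one utterance: [P(ABD), P(MBD)]
preds = [
    [0.62, 0.38],  # stand-in for "DenseNet169"
    [0.55, 0.45],  # stand-in for "ResNet101"
    [0.40, 0.60],  # stand-in for "MobileNetV2"
]
merged = stack(preds, weights=[0.4, 0.4, 0.2])  # favour the stronger models
label = DIALECTS[max(range(len(merged)), key=merged.__getitem__)]
assert label == "ABD"  # the ensemble overrides the weakest model's vote
```

Even this crude weighted average shows why stacking helps: a single noisy base model (the third one above) is outvoted by the ensemble, which is the behaviour the paper reports as an accuracy gain of up to 5% on specific dialect pairs.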
Unsupervised dual-teacher knowledge distillation for pseudo-label refinement in domain adaptive person re-identification
IF 3.6 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-04 | DOI: 10.1007/s11042-024-20147-5
Sidharth Samanta, Debasish Jena, Suvendu Rup

Unsupervised Domain Adaptation (UDA) in person re-identification (reID) addresses the challenge of adapting models trained on labeled source domains to unlabeled target domains, which is crucial for real-world applications. A significant problem in clustering-based UDA methods is the noise in pseudo-labels generated due to inter-domain disparities, which can degrade the performance of reID models. To address this issue, we propose the Unsupervised Dual-Teacher Knowledge Distillation (UDKD), an efficient learning scheme designed to enhance robustness against noisy pseudo-labels in UDA for person reID. The proposed UDKD method combines the outputs of two source-trained classifiers (teachers) to train a third classifier (student) using a modified soft-triplet loss-based metric learning approach. Additionally, a weighted averaging technique is employed to rectify the noise in the predicted labels generated from the teacher networks. Experimental results demonstrate that the proposed UDKD significantly improves performance in terms of mean Average Precision (mAP) and Cumulative Match Characteristic curve (Rank 1, 5, and 10). Specifically, UDKD achieves an mAP of 84.57 and 73.32, and Rank 1 scores of 94.34 and 88.26 for Duke to Market and Market to Duke scenarios, respectively. These results surpass the state-of-the-art performance, underscoring the efficacy of UDKD in advancing UDA techniques for person reID and highlighting its potential to enhance performance and robustness in real-world applications.

Citations: 0
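The weighted-averaging step that UDKD uses to rectify noisy teacher predictions can be sketched as a convex combination of the two teachers' identity distributions, which then serves as the soft target for the student. The mixing weight alpha and the distributions below are illustrative assumptions; the paper's exact weighting scheme and its soft-triplet metric-learning loss are not reproduced here.

```python
import math

# Sketch of UDKD's pseudo-label refinement: two source-trained teachers
# each output a probability distribution over person identities, and a
# weighted average forms the student's soft target. All numbers are
# illustrative, not taken from the paper.

def refine_pseudo_label(p1, p2, alpha=0.6):
    """Soft pseudo-label as a convex combination of the two teachers."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]

def soft_cross_entropy(target, student, eps=1e-12):
    """A loss the student could minimize against the refined soft label."""
    return -sum(t * math.log(s + eps) for t, s in zip(target, student))

t1 = [0.7, 0.2, 0.1]  # teacher 1: confident in identity 0
t2 = [0.5, 0.4, 0.1]  # teacher 2: noisier, less certain
soft = refine_pseudo_label(t1, t2)

assert abs(sum(soft) - 1.0) < 1e-9  # still a valid distribution
assert soft[0] == max(soft)         # agreement on identity 0 survives
assert soft_cross_entropy(soft, soft) > 0.0
```

Because the combined label down-weights whichever teacher is noisier on a given sample, the student sees a smoother target than either teacher alone, which is the intuition behind the robustness gains the abstract reports.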