
Latest publications in Computers, materials & continua

Image to Image Translation Based on Differential Image Pix2Pix Model
Pub Date : 2023-01-01 DOI: 10.32604/cmc.2023.041479
Xi Zhao, Haizheng Yu, Hong Bian
In recent years, Pix2Pix, a model within the domain of GANs, has found widespread application in the field of image-to-image translation. However, traditional Pix2Pix models suffer from significant drawbacks in image generation, such as the loss of important information features during the encoding and decoding processes, as well as a lack of constraints during the training process. To address these issues and improve the quality of Pix2Pix-generated images, this paper introduces two key enhancements. Firstly, to reduce information loss during encoding and decoding, we utilize the U-Net++ network as the generator for the Pix2Pix model, incorporating denser skip-connections to minimize information loss. Secondly, to enhance constraints during image generation, we introduce a specialized discriminator designed to distinguish differential images, further enhancing the quality of the generated images. We conducted experiments on the facades dataset and the sketch portrait dataset from the Chinese University of Hong Kong to validate our proposed model. The experimental results demonstrate that our improved Pix2Pix model significantly enhances image quality and outperforms other models in the selected metrics. Notably, the Pix2Pix model incorporating the differential image discriminator exhibits the most substantial improvements across all metrics. An analysis of the experimental results reveals that the use of the U-Net++ generator effectively reduces information feature loss, while the Pix2Pix model incorporating the differential image discriminator enhances the supervision of the generator during training. Both of these enhancements collectively improve the quality of Pix2Pix-generated images.
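The differential-image constraint lends itself to a compact illustration. The PyTorch sketch below is not the authors' implementation: it assumes the differential image is the pixel-wise difference between an image and its source-domain input, uses a small PatchGAN-style network, and the name DiffDiscriminator and the loss wiring are invented for illustration.

```python
import torch
import torch.nn as nn

class DiffDiscriminator(nn.Module):
    """Small PatchGAN-style CNN that scores differential images (illustrative)."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-level real/fake logits
        )

    def forward(self, diff: torch.Tensor) -> torch.Tensor:
        return self.net(diff)

if __name__ == "__main__":
    bce = nn.BCEWithLogitsLoss()
    disc = DiffDiscriminator()
    src = torch.rand(1, 3, 64, 64)      # source-domain input image
    target = torch.rand(1, 3, 64, 64)   # ground-truth translated image
    fake = torch.rand(1, 3, 64, 64)     # stand-in for the generator output
    d_real, d_fake = target - src, fake - src   # assumed differential images
    real_logits, fake_logits = disc(d_real), disc(d_fake)
    loss_d = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    print(f"differential-image discriminator loss: {loss_d.item():.3f}")
```

In a full Pix2Pix-style objective, such a term would sit alongside the usual adversarial and L1 losses that supervise the generator.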
Citations: 0
Using Speaker-Specific Emotion Representations in Wav2vec 2.0-Based Modules for Speech Emotion Recognition
Pub Date : 2023-01-01 DOI: 10.32604/cmc.2023.041332
Somin Park, Mpabulungi Mark, Bogyung Park, Hyunki Hong
Speech emotion recognition is essential for frictionless human-machine interaction, where machines respond to human instructions with context-aware actions. The properties of individuals’ voices vary with culture, language, gender, and personality. These variations in speaker-specific properties may hamper the performance of standard representations in downstream tasks such as speech emotion recognition (SER). This study demonstrates the significance of speaker-specific speech characteristics and how considering them can be leveraged to improve the performance of SER models. In the proposed approach, two wav2vec-based modules (a speaker-identification network and an emotion classification network) are trained with the Arcface loss. The speaker-identification network has a single attention block to encode an input audio waveform into a speaker-specific representation. The emotion classification network uses a wav2vec 2.0 backbone as well as four attention blocks to encode the same input audio waveform into an emotion representation. These two representations are then fused into a single vector representation containing emotion and speaker-specific information. Experimental results showed that the use of speaker-specific characteristics improves SER performance. Additionally, combining these with an angular marginal loss such as the Arcface loss improves intra-class compactness while increasing inter-class separability, as demonstrated by plots of t-distributed stochastic neighbor embeddings (t-SNE). The proposed approach outperforms previous methods using similar training strategies, with a weighted accuracy (WA) of 72.14% and unweighted accuracy (UA) of 72.97% on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset. This demonstrates its effectiveness and potential to enhance human-machine interaction through more accurate emotion recognition in speech.
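For concreteness, the sketch below shows an Arcface-style angular-margin loss together with the concatenation of a speaker embedding and an emotion embedding, in PyTorch. It is not the paper's implementation; the scale, margin, embedding sizes, and four-class setup are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceLoss(nn.Module):
    """Angular-margin softmax loss: adds a margin m to the target-class angle."""
    def __init__(self, embed_dim: int, num_classes: int, s: float = 30.0, m: float = 0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.s, self.m = s, m

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalised embeddings and class centres.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        one_hot = F.one_hot(labels, cosine.size(1)).float()
        logits = self.s * torch.cos(theta + self.m * one_hot)  # margin on true class only
        return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    speaker_emb = torch.randn(8, 256)   # from the speaker-identification branch
    emotion_emb = torch.randn(8, 256)   # from the wav2vec 2.0 emotion branch
    fused = torch.cat([speaker_emb, emotion_emb], dim=-1)   # joint representation
    labels = torch.randint(0, 4, (8,))                      # e.g. 4 emotion classes
    loss = ArcFaceLoss(embed_dim=512, num_classes=4)(fused, labels)
    print(f"arcface loss: {loss.item():.3f}")
```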
Citations: 0
Collaborative Detection and Prevention of Sybil Attacks against RPL-Based Internet of Things
Pub Date : 2023-01-01 DOI: 10.32604/cmc.2023.040756
Muhammad Ali Khan, Rao Naveed Bin Rais, Osman Khalid
The Internet of Things (IoT) comprises numerous resource-constrained devices that generate large volumes of data. The inherent vulnerabilities in IoT infrastructure, such as easily spoofed IP and MAC addresses, pose significant security challenges. Traditional routing protocols designed for wired or wireless networks may not be suitable for IoT networks due to their limitations. Therefore, the Routing Protocol for Low-Power and Lossy Networks (RPL) is widely used in IoT systems. However, the built-in security mechanism of RPL is inadequate in defending against sophisticated routing attacks, including Sybil attacks. To address these issues, this paper proposes a centralized and collaborative approach for securing RPL-based IoT against Sybil attacks. The proposed approach consists of detection and prevention algorithms based on the Random Password Generation and comparison methodology (RPG). The detection algorithm verifies the passwords of communicating nodes before comparing their keys and constant IDs, while the prevention algorithm utilizes a delivery delay ratio to restrict the participation of sensor nodes in communication. Through simulations, it is demonstrated that the proposed approach achieves better results compared to distributed defense mechanisms in terms of throughput, average delivery delay and detection rate. Moreover, the proposed countermeasure effectively mitigates brute-force and side-channel attacks in addition to Sybil attacks. The findings suggest that implementing the RPG-based detection and prevention algorithms can provide robust security for RPL-based IoT networks.
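The detection and prevention steps described here can be sketched in plain Python. The snippet below is only an illustration under stated assumptions: the node fields, the registry lookup, and the delivered/sent ratio standing in for the delivery delay ratio are all invented, not the paper's exact algorithm.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str        # constant ID
    key: str
    password: str       # random password issued under the RPG scheme
    delivered: int = 0  # packets delivered
    sent: int = 1       # packets sent (kept >= 1 to avoid division by zero)

def detect_sybil(sender: Node, claimed_password: str, registry: dict) -> bool:
    """Return True if the sender looks like a Sybil node."""
    registered = registry.get(sender.node_id)
    if registered is None or claimed_password != registered.password:
        return True  # password verification fails before any further comparison
    # Only after the password check are the key and constant ID compared.
    return sender.key != registered.key or sender.node_id != registered.node_id

def allow_participation(node: Node, min_ratio: float = 0.5) -> bool:
    """Prevention step: restrict nodes whose delivery ratio is poor (assumed rule)."""
    return node.delivered / node.sent >= min_ratio

if __name__ == "__main__":
    registry = {"n1": Node("n1", key="K1", password="pw-93f2")}
    attacker = Node("n1", key="FAKE", password="guess", delivered=2, sent=10)
    print(detect_sybil(attacker, attacker.password, registry))   # True -> flagged
    print(allow_participation(attacker))                         # False -> restricted
```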
Citations: 0
Ontology-Based Crime News Semantic Retrieval System
Pub Date : 2023-01-01 DOI: 10.32604/cmc.2023.036074
Fiaz Majeed, Afzaal Ahmad, Muhammad Awais Hassan, Muhammad Shafiq, Jin-Ghoo Choi, Habib Hamam
Every day, the media reports large numbers of crimes that are followed by many users and accumulate continuously. Crime news exists on the Internet in unstructured formats such as books, websites, documents, and journals. From such heterogeneous data, it is very challenging to extract relevant information, which is a time-consuming and critical task for the public and law enforcement agencies. Keyword-based Information Retrieval (IR) systems rely on statistics to retrieve results, making it difficult to obtain relevant ones. They are unable to understand the user's query and thus face word mismatches due to context changes and the inherent ambiguity of word meanings. Therefore, such datasets need to be organized in a structured configuration, with the goal of efficiently manipulating the data while respecting its semantics. An ontological semantic IR system is needed that can retrieve the right investigative information and surface important clues to solve criminal cases. The semantic system retrieves information in view of the similarity of the semantics among indexed data and user queries. In this paper, we develop an ontology-based semantic IR system that leverages the latest semantic technologies, including the resource description framework (RDF), the SPARQL protocol and RDF query language (SPARQL), the semantic web rule language (SWRL), and the web ontology language (OWL). We have conducted two experiments. In the first experiment, we implemented a keyword-based textual IR system using Apache Lucene. In the second experiment, we implemented a semantic system that uses an ontology to store the data and retrieves precise results with high accuracy using SPARQL queries. The keyword-based system filtered results with 51% accuracy, while the semantic system filtered results with 95% accuracy, leading to significant improvements in the field and opening up new horizons for researchers.
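The retrieval side of such a system can be pictured with rdflib. The sketch below is not the paper's ontology: the namespace, classes, properties, and instances are invented solely to show how crime facts are stored as triples and retrieved with a SPARQL query.

```python
from rdflib import Graph, Namespace, Literal, RDF

CRIME = Namespace("http://example.org/crime#")   # hypothetical namespace
g = Graph()
g.bind("crime", CRIME)

# A few illustrative triples: two crime events with types, locations, and a suspect.
g.add((CRIME.event1, RDF.type, CRIME.Robbery))
g.add((CRIME.event1, CRIME.location, Literal("CityA")))
g.add((CRIME.event1, CRIME.suspect, CRIME.personX))
g.add((CRIME.event2, RDF.type, CRIME.Fraud))
g.add((CRIME.event2, CRIME.location, Literal("CityB")))

# Semantic retrieval: every robbery event and where it happened.
query = """
PREFIX crime: <http://example.org/crime#>
SELECT ?event ?place WHERE {
    ?event a crime:Robbery ;
           crime:location ?place .
}
"""
for event, place in g.query(query):
    print(event, place)
```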
Citations: 0
A Scalable Interconnection Scheme in Many-Core Systems
Pub Date : 2023-01-01 DOI: 10.32604/cmc.2023.038810
Allam Abumwais, Mujahed Eleyat
Recent multi-core architectures may have a relatively large number of cores, typically ranging from tens to hundreds; such systems are therefore called many-core systems. They require an efficient interconnection network that addresses two major problems. First, the overhead of power and area cost and its effect on scalability. Second, the high access latency caused by multiple cores simultaneously accessing the same shared module. This paper presents an interconnection scheme called N-conjugate Shuffle Clusters (NCSC) based on a multi-core multi-cluster architecture to reduce the overhead of these problems. NCSC eliminated the need for router devices and their complexity and hence reduced the power and area costs. It also reassigned and distributed the shared caches across the interconnection network to increase the ability for simultaneous access and hence reduce the access latency. For intra-cluster communication, Multi-port Content Addressable Memory (MPCAM) is used. The experimental results, using four clusters with four cores each, indicated that the average access latency for a write process is 1.14785 ± 0.04532 ns, which is nearly equal to the latency of a write operation in MPCAM. Moreover, the average read latency is 1.26226 ± 0.090591 ns within a cluster and around 1.92738 ± 0.139588 ns for read access between cores from different clusters.
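As a back-of-the-envelope illustration only, the reported averages can be folded into a toy latency model for the four-cluster, four-core configuration; the Core structure and the lookup function below are assumptions for illustration, not the NCSC design itself.

```python
from dataclasses import dataclass
from itertools import permutations

CLUSTERS, CORES_PER_CLUSTER = 4, 4
AVG_WRITE_NS = 1.14785                                    # reported write latency
AVG_READ_INTRA_NS, AVG_READ_INTER_NS = 1.26226, 1.92738   # reported read latencies

@dataclass(frozen=True)
class Core:
    cluster: int
    index: int

def read_latency(src: Core, dst: Core) -> float:
    """Expected read latency between two cores under the reported averages."""
    return AVG_READ_INTRA_NS if src.cluster == dst.cluster else AVG_READ_INTER_NS

if __name__ == "__main__":
    cores = [Core(c, i) for c in range(CLUSTERS) for i in range(CORES_PER_CLUSTER)]
    pairs = list(permutations(cores, 2))
    mean_read = sum(read_latency(a, b) for a, b in pairs) / len(pairs)
    print(f"write ~{AVG_WRITE_NS:.3f} ns; mean read over all core pairs ~{mean_read:.3f} ns")
```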
Citations: 0
Unweighted Voting Method to Detect Sinkhole Attack in RPL-Based Internet of Things Networks
Pub Date : 2023-01-01 DOI: 10.32604/cmc.2023.041108
Shadi Al-Sarawi, Mohammed Anbar, Basim Ahmad Alabsi, Mohammad Adnan Aladaileh, Shaza Dawood Ahmed Rihan
The Internet of Things (IoT) consists of interconnected smart devices communicating and collecting data. The Routing Protocol for Low-Power and Lossy Networks (RPL) is the standard protocol for Internet Protocol Version 6 (IPv6) in the IoT. However, RPL is vulnerable to various attacks, including the sinkhole attack, which disrupts the network by manipulating routing information. This paper proposes the Unweighted Voting Method (UVM) for sinkhole node identification, utilizing three key behavioral indicators: DODAG Information Object (DIO) Transaction Frequency, Rank Harmony, and Power Consumption. These indicators have been carefully selected based on their contribution to sinkhole attack detection and other relevant features used in previous research. The UVM method employs an unweighted voting mechanism, where each voter or rule holds equal weight in detecting the presence of a sinkhole attack based on the proposed indicators. The effectiveness of the UVM method is evaluated using the COOJA simulator and compared with existing approaches. Notably, the proposed approach fulfills power consumption requirements for constrained nodes without increasing consumption due to the deployment design. In terms of detection accuracy, simulation results demonstrate a high detection rate ranging from 90% to 100%, with a low false-positive rate of 0% to 0.2%. Consequently, the proposed approach surpasses Ensemble Learning Intrusion Detection Systems by leveraging three indicators and three supporting rules.
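The unweighted voting rule itself is small enough to sketch: one vote per behavioral indicator, each with equal weight, and a simple majority decision. The snippet below is illustrative only; the thresholds are placeholders, not the decision rules derived in the paper.

```python
from dataclasses import dataclass

@dataclass
class NodeStats:
    dio_per_minute: float   # DIO transaction frequency
    rank_delta: int         # deviation from the expected RPL rank ("rank harmony")
    power_mw: float         # power consumption

def vote_dio(n: NodeStats, limit: float = 10.0) -> int:
    return int(n.dio_per_minute > limit)          # unusually chatty DIO traffic

def vote_rank(n: NodeStats, tolerance: int = 1) -> int:
    return int(abs(n.rank_delta) > tolerance)     # advertised rank out of harmony

def vote_power(n: NodeStats, limit_mw: float = 3.0) -> int:
    return int(n.power_mw > limit_mw)             # abnormal energy use

def is_sinkhole(n: NodeStats) -> bool:
    votes = vote_dio(n) + vote_rank(n) + vote_power(n)   # every voter weighs the same
    return votes >= 2                                    # simple majority

if __name__ == "__main__":
    suspect = NodeStats(dio_per_minute=25.0, rank_delta=4, power_mw=2.1)
    print(is_sinkhole(suspect))   # True: two of the three indicators trip
```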
Citations: 0
Multi-Modal Scene Matching Location Algorithm Based on M2Det
Pub Date : 2023-01-01 DOI: 10.32604/cmc.2023.039582
Jiwei Fan, Xiaogang Yang, Ruitao Lu, Qingge Li, Siyu Wang
In recent years, many visual positioning algorithms have been proposed based on computer vision, and they have achieved good results. However, these algorithms serve a single function, cannot perceive the environment, and generalize poorly, and mismatches occur that degrade positioning accuracy. Therefore, this paper proposes a location algorithm that combines a target recognition algorithm with a depth feature matching algorithm to solve the problem of unmanned aerial vehicle (UAV) environment perception and multi-modal image-matching fusion location. The algorithm builds on the single-shot object detector based on a multi-level feature pyramid network (M2Det) and replaces the original visual geometry group (VGG) feature extraction network with the ResNet-101 network to improve the feature extraction capability of the network model. By introducing a depth feature matching algorithm, the algorithm shares neural network weights and realizes the design of UAV target recognition and a multi-modal image-matching fusion positioning algorithm. When the reference image and the real-time image are mismatched, the dynamic adaptive proportional constraint random sample consensus algorithm (DAPC-RANSAC) is used to optimize the matching results and improve the rate of correct target matches. Using the multi-modal registration data set, the proposed algorithm was compared and analyzed to verify its superiority and feasibility. The results show that the algorithm proposed in this paper can effectively handle matching between multi-modal images (visible image–infrared image, infrared image–satellite image, visible image–satellite image) and remains stable and robust under changes in contrast, scale, brightness, blur, deformation, and other conditions. Finally, the effectiveness and practicability of the proposed algorithm were verified in an aerial test scene with an S1000 six-rotor UAV.
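The consensus-based refinement step can be illustrated with a generic RANSAC loop over putative point matches. The NumPy sketch below is not DAPC-RANSAC: it estimates only a 2-D translation between matched points while discarding mismatches, and the tolerance and iteration count are arbitrary.

```python
import numpy as np

def ransac_translation(src: np.ndarray, dst: np.ndarray,
                       iters: int = 200, tol: float = 2.0, seed: int = 0):
    """Estimate a 2-D translation from noisy matches while rejecting outliers."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = np.zeros(2), np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))                 # minimal sample: one correspondence
        t = dst[i] - src[i]
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    # Refit the translation on all inliers for the final estimate.
    return (dst[best_inliers] - src[best_inliers]).mean(axis=0), best_inliers

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    src = rng.uniform(0, 500, size=(60, 2))
    dst = src + np.array([12.0, -7.0]) + rng.normal(0, 0.5, size=src.shape)
    dst[:10] = rng.uniform(0, 500, size=(10, 2))   # ten gross mismatches
    t, inliers = ransac_translation(src, dst)
    print("estimated shift:", t.round(2), "| inliers kept:", int(inliers.sum()))
```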
Citations: 0
Multi-Modal Military Event Extraction Based on Knowledge Fusion
Pub Date : 2023-01-01 DOI: 10.32604/cmc.2023.040751
Yuyuan Xiang, Yangli Jia, Xiangliang Zhang, Zhenling Zhang
Event extraction stands as a significant endeavor within the realm of information extraction, aspiring to automatically extract structured event information from vast volumes of unstructured text. Extracting event elements from multi-modal data remains a challenging task due to the presence of a large number of images and overlapping event elements in the data. Although researchers have proposed various methods to accomplish this task, most existing event extraction models cannot address these challenges because they are only applicable to text scenarios. To solve the above issues, this paper proposes a multi-modal event extraction method based on knowledge fusion. Specifically, for event-type recognition, we use a meticulous pipeline approach that integrates multiple pre-trained models. This approach enables a more comprehensive capture of the multidimensional event semantic features present in military texts, thereby enhancing the interconnectedness of information between trigger words and events. For event element extraction, we propose a method for constructing a priori templates that combine event types with corresponding trigger words. This approach facilitates the acquisition of fine-grained input samples containing event trigger words, thus enabling the model to understand the semantic relationships between elements in greater depth. Furthermore, a fusion method for spatial mapping of textual event elements and image elements is proposed to reduce the category number overload and effectively achieve multi-modal knowledge fusion. The experimental results based on the CCKS 2022 dataset show that our method has achieved competitive results, with an overall F1-score of 53.4%. These results validate the effectiveness of our method in extracting event elements from multi-modal data.
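The a priori template idea can be pictured as pairing each event type with its trigger words and emitting one templated input per hit in a sentence, as in the sketch below; the event types, trigger lists, and template wording are invented placeholders rather than the paper's schema.

```python
# Hypothetical event-type -> trigger-word inventory (placeholder values).
EVENT_TRIGGERS = {
    "Attack":   ["strike", "bombard", "raid"],
    "Movement": ["deploy", "advance", "withdraw"],
}

def build_prior_templates(sentence: str) -> list:
    """Return one templated input per (event type, trigger word) found in the text."""
    hits = []
    lowered = sentence.lower()
    for event_type, triggers in EVENT_TRIGGERS.items():
        for trigger in triggers:
            if trigger in lowered:
                hits.append(f"[{event_type}] trigger='{trigger}' | text: {sentence}")
    return hits

if __name__ == "__main__":
    for t in build_prior_templates("Two battalions advance toward the river at dawn."):
        print(t)
```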
Citations: 0
Linguistic Knowledge Representation in DPoS Consensus Scheme for Blockchain
Pub Date : 2023-01-01 DOI: 10.32604/cmc.2023.040970
Yixia Chen, Mingwei Lin
{"title":"Linguistic Knowledge Representation in DPoS Consensus Scheme for Blockchain","authors":"Yixia Chen, Mingwei Lin","doi":"10.32604/cmc.2023.040970","DOIUrl":"https://doi.org/10.32604/cmc.2023.040970","url":null,"abstract":"","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135650146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
3-D Gait Identification Utilizing Latent Canonical Covariates Consisting of Gait Features
Pub Date : 2023-01-01 DOI: 10.32604/cmc.2023.032069
Ramiz Gorkem Birdal, Ahmet Sertbas
Biometric gait recognition is a lesser-known but emerging and effective biometric recognition method which enables subjects’ walking patterns to be recognized. Existing research in this area has primarily focused on feature analysis through the extraction of individual features, which captures most of the information but fails to capture subtle variations in gait dynamics. Therefore, a novel feature taxonomy and an approach for deriving a relationship between a function of one set of gait features and another set are introduced. The gait features extracted from body halves divided by anatomical planes on vertical, horizontal, and diagonal axes are grouped to form canonical gait covariates. Canonical Correlation Analysis (CCA) is utilized to measure the strength of association between the canonical covariates of gait. Thus, gait assessment and identification are enhanced when more semantic information is available through CCA-based multi-feature fusion. Carnegie Mellon University’s 3D gait database, which contains 32 gait samples taken at different paces, is utilized in analyzing gait characteristics. The performance of Linear Discriminant Analysis, K-Nearest Neighbors, Naive Bayes, Artificial Neural Networks, and Support Vector Machines improved by an average of 4% when the CCA-based gait identification approach was used. A maximum accuracy rate of 97.8% was achieved through CCA-based gait identification. Beyond that, the rates of false identifications and unrecognized gaits were halved, demonstrating state-of-the-art gait identification performance.
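The CCA step can be reproduced in miniature with scikit-learn: two groups of gait features are projected onto latent canonical covariates, the canonical correlations are read off, and the covariates are concatenated into a fused representation. The feature groups below are random stand-ins, not the CMU gait features.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_samples = 32                                   # e.g. 32 gait samples
upper = rng.normal(size=(n_samples, 6))          # features from one body half (stand-in)
lower = 0.7 * upper[:, :4] + rng.normal(scale=0.3, size=(n_samples, 4))  # correlated half

cca = CCA(n_components=2)
U, V = cca.fit_transform(upper, lower)           # latent canonical covariates

# Canonical correlations: correlation between paired covariate columns.
for k in range(U.shape[1]):
    r = np.corrcoef(U[:, k], V[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: {r:.3f}")

fused = np.hstack([U, V])                        # fused representation for a classifier
print("fused feature shape:", fused.shape)       # could feed LDA, k-NN, SVM, etc.
```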
Citations: 0