
Latest publications in Drones

MFEFNet: A Multi-Scale Feature Information Extraction and Fusion Network for Multi-Scale Object Detection in UAV Aerial Images
Pub Date: 2024-05-08 DOI: 10.3390/drones8050186
Liming Zhou, Shuai Zhao, Ziye Wan, Yang Liu, Yadi Wang, Xianyu Zuo
Unmanned aerial vehicles (UAVs) are now widely used in many fields. Because of the randomness of UAV flight height and shooting angle, UAV images usually share several characteristics: many small objects, large variations in object scale, and complex backgrounds. Object detection in UAV aerial images is therefore a very challenging task. To address these challenges, this paper proposes a novel UAV image object detection method based on global feature aggregation and context feature extraction, named the multi-scale feature information extraction and fusion network (MFEFNet). Specifically, first, to extract object feature information more effectively from complex backgrounds, we propose an efficient spatial information extraction module (SIEM), which combines residual connections to build long-distance feature dependencies and extracts the most useful feature information by modeling contextual feature relations around objects. Secondly, to improve feature fusion efficiency and reduce the burden introduced by redundant feature fusion networks, we propose a global aggregation progressive feature fusion network (GAFN). This network adopts a three-level adaptive feature fusion method that fuses multi-scale features according to the importance of different feature layers and reduces unnecessary intermediate redundant features through the adaptive feature fusion module (AFFM). Furthermore, we use the MPDIoU loss as the bounding-box regression loss, which not only enhances model robustness to noise but also simplifies the calculation and improves the final detection efficiency. Finally, the proposed MFEFNet was tested on the VisDrone and UAVDT datasets, where the mAP@0.5 value increased by 2.7% and 2.2%, respectively.
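The MPDIoU loss mentioned in the abstract has a compact closed form; below is a minimal sketch in plain Python, assuming the standard published definition (IoU minus the normalized squared distances between the two boxes' top-left and bottom-right corners). Function and argument names are illustrative.

```python
def mpdiou_loss(pred, target, img_w, img_h):
    """MPDIoU bounding-box regression loss (sketch of the standard definition).

    Boxes are (x1, y1, x2, y2) in corner format; (img_w, img_h) is the input
    image size used to normalize the corner-distance penalties.
    """
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target
    # Intersection and union for the IoU term
    inter_w = max(0.0, min(px2, tx2) - max(px1, tx1))
    inter_h = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = inter_w * inter_h
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    iou = inter / union if union > 0 else 0.0
    # Squared distances between matching corners, normalized by the image diagonal
    norm = img_w ** 2 + img_h ** 2
    d1 = (px1 - tx1) ** 2 + (py1 - ty1) ** 2  # top-left corner offset^2
    d2 = (px2 - tx2) ** 2 + (py2 - ty2) ** 2  # bottom-right corner offset^2
    return 1.0 - (iou - d1 / norm - d2 / norm)
```

Identical boxes give a loss of exactly 0; distant disjoint boxes push the loss above 1, because the corner-distance penalties stay informative even where the IoU term vanishes.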
Citations: 0
Intelligent Packet Priority Module for a Network of Unmanned Aerial Vehicles Using Manhattan Long Short-Term Memory
Pub Date: 2024-05-07 DOI: 10.3390/drones8050183
Dino Budi Prakoso, J. H. Windiatmaja, Agus Mulyanto, Riri Fitri Sari, R. Nordin
Unmanned aerial vehicles (UAVs) are becoming more common in wireless communication networks, but using them can introduce network problems. An issue arises when UAVs operate in a network-access-limited environment alongside interfering nodes, which can hinder UAV network connectivity. This paper introduces an intelligent packet priority module (IPPM) to minimize network latency. The study analyzed Network Simulator-3 (NS-3) network modules, utilizing Manhattan long short-term memory (MaLSTM) to classify packets as originating from critical UAV, ground control station (GCS), or interfering nodes. To mitigate the network latency and packet delivery ratio (PDR) degradation caused by interfering nodes, packets from prioritized nodes are transmitted first. Simulation results and evaluation show that the proposed IPPM outperformed previous approaches: the MaLSTM-based priority module led to lower network delay and a higher packet delivery ratio, averaging a network delay of 62.2 ms and a PDR of 0.97, while the MaLSTM classifier peaked at 97.5% accuracy. Upon further evaluation, the stability of the Siamese LSTM models was observed to be consistent across diverse similarity functions, including cosine and Euclidean distances.
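The Manhattan LSTM mentioned in the abstract scores the similarity of two sequence encodings as exp(−L1 distance). A minimal sketch of that similarity, plus a hypothetical prototype-based priority decision (the abstract does not detail the paper's actual classification head, so those names are illustrative):

```python
import numpy as np

def manhattan_similarity(h_a, h_b):
    """MaLSTM similarity: exp(-L1 distance) between the final hidden states
    of a shared (Siamese) LSTM encoder; the result lies in (0, 1]."""
    diff = np.asarray(h_a, dtype=float) - np.asarray(h_b, dtype=float)
    return float(np.exp(-np.sum(np.abs(diff))))

def classify_priority(h_packet, prototypes):
    """Assign a packet embedding to the most similar class prototype.
    The prototype scheme is an illustrative assumption, not the paper's
    exact pipeline."""
    return max(prototypes, key=lambda label: manhattan_similarity(h_packet, prototypes[label]))
```

Because the similarity is bounded in (0, 1], it can be trained directly against binary same/different labels without an extra squashing layer, which is the usual appeal of the MaLSTM formulation.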
Citations: 0
UAV-Mounted RIS-Aided Mobile Edge Computing System: A DDQN-Based Optimization Approach
Pub Date: 2024-05-07 DOI: 10.3390/drones8050184
Min Wu, Shibing Zhu, Changqing Li, Jiao Zhu, Yudi Chen, Xiangyu Liu, Rui Liu
Unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs) are increasingly employed in mobile edge computing (MEC) systems to flexibly modify the signal transmission environment. This is achieved through active manipulation of the wireless channel, enabled by the mobile deployment of UAVs, and through the intelligent reflection of signals by RISs. However, these technologies are subject to inherent limitations, such as the restricted range of UAVs and the limited coverage of RISs, which hinder their broader application. Integrating UAVs and RISs into combined UAV–RIS schemes is a promising way to surmount these limitations by leveraging the strengths of both technologies. Motivated by these observations, we consider a novel UAV–RIS-aided MEC system, wherein the UAV–RIS plays a pivotal role in facilitating communication between terrestrial vehicle users and MEC servers. To address the resulting challenging non-convex problem, we propose an energy-constrained approach that maximizes the system's energy efficiency based on a double deep Q-network (DDQN), which is employed to realize joint control of the UAVs, the passive beamforming, and the resource allocation for MEC. Numerical results demonstrate that the proposed optimization scheme significantly enhances the system efficiency of the UAV–RIS-aided time division multiple access (TDMA) network.
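The distinguishing step of a double deep Q-network, relative to vanilla DQN, is that the online network *selects* the next action while the target network *evaluates* it, which reduces Q-value overestimation. A minimal sketch of that target computation (names and shapes are illustrative):

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double DQN bootstrap targets.

    next_q_online, next_q_target: arrays of shape (batch, n_actions).
    rewards, dones: arrays of shape (batch,); dones is 1.0 at terminal steps.
    """
    # Online network selects the greedy next action...
    best_actions = np.argmax(next_q_online, axis=1)
    # ...and the (slowly updated) target network evaluates that action.
    evaluated = next_q_target[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * evaluated * (1.0 - dones)
```

The online network is then regressed toward these targets on sampled transitions; the decoupling matters exactly when the online network's argmax would pick an overestimated action.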
Citations: 0
Vision-Guided Tracking and Emergency Landing for UAVs on Moving Targets
Pub Date: 2024-05-03 DOI: 10.3390/drones8050182
Yisak Debele, Hayoung Shi, Assefinew Wondosen, H. Warku, T. Ku, Beom-Soo Kang
This paper presents a vision-based adaptive tracking and landing method for multirotor unmanned aerial vehicles (UAVs), designed for safe recovery amid propulsion system failures that reduce maneuverability and responsiveness. The method addresses challenges posed by external disturbances, such as wind and agile target movements, by explicitly considering the maneuverability and control limitations caused by propulsion system failures. Building on our previous research in actuator fault detection and tolerance, our approach employs a modified adaptive pure pursuit guidance technique with an extra adaptation parameter that accounts for reduced maneuverability, thus ensuring safe tracking of moving objects. Additionally, we present an adaptive landing strategy that compensates for tracking deviations and minimizes off-target landings caused by lateral tracking errors and delayed responses, using a lateral-offset-dependent vertical velocity control. Our system employs vision-based tag detection to ascertain the position of the unmanned ground vehicle (UGV) relative to the UAV. We implemented this system in a mid-mission emergency landing scenario that includes actuator health monitoring to trigger the emergency landing. Extensive testing and simulations demonstrate the effectiveness of our approach, significantly advancing the development of safe tracking and emergency landing methods for UAVs with compromised control authority due to actuator failures.
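Pure pursuit guidance steers toward a virtual point projected ahead of the moving target. The sketch below adds a hypothetical health-dependent adaptation of the lookahead and an illustrative lateral-offset-dependent descent law, in the spirit of the abstract but not copied from the paper; all names and the exact adaptation laws are assumptions.

```python
import math

def pure_pursuit_heading(uav_pos, target_pos, target_vel, lookahead_gain, health=1.0):
    """Desired heading (rad) toward a virtual aim point ahead of the target.

    `health` in (0, 1] stands in for the extra adaptation parameter from the
    abstract: a degraded vehicle (health < 1) gets a longer preview so the
    commanded turn stays within its reduced maneuverability.
    """
    lookahead = lookahead_gain / health  # longer preview when degraded
    aim_x = target_pos[0] + target_vel[0] * lookahead
    aim_y = target_pos[1] + target_vel[1] * lookahead
    return math.atan2(aim_y - uav_pos[1], aim_x - uav_pos[0])

def descent_rate(lateral_offset, v_max, radius):
    """Lateral-offset-dependent vertical speed: descend at full rate only
    when centered over the pad, hold altitude when far off-center."""
    return v_max * max(0.0, 1.0 - abs(lateral_offset) / radius)
```

Gating the descent on lateral offset is what prevents the off-target touchdowns the abstract describes: a slow lateral response simply delays the descent instead of letting the vehicle land beside the pad.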
Citations: 0
Fine-Grained Feature Perception for Unmanned Aerial Vehicle Target Detection Algorithm
Pub Date: 2024-05-03 DOI: 10.3390/drones8050181
Shi Liu, Meng Zhu, Rui Tao, Honge Ren
Unmanned aerial vehicle (UAV) aerial images often present challenges such as small target sizes, high target density, varied shooting angles, and dynamic poses. Existing target detection algorithms exhibit a noticeable performance decline on UAV aerial images compared to general scenes. This paper proposes a small target detection algorithm for UAVs, named Fine-Grained Feature Perception YOLOv8s-P2 (FGFP-YOLOv8s-P2), based on the YOLOv8s-P2 architecture, which improves detection accuracy while meeting real-time requirements. First, we enhance the targets' pixel information by utilizing slice-assisted training and inference, thereby reducing missed detections. Then, we propose a feature extraction module with deformable convolutions; decoupling the learning of the offset and the modulation scalar enables better adaptation to variations in the size and shape of diverse targets. In addition, we introduce a large-kernel spatial pyramid pooling module. By cascading convolutions, we leverage the advantages of large kernels to flexibly adjust the model's attention to various regions of high-level feature maps, better adapting to complex visual scenes while circumventing the cost drawbacks of large kernels. To match the excellent real-time detection performance of the baseline model, we propose an improved Random FasterNet Block, which introduces randomness during convolution and captures spatial features of non-linear transformation channels, enriching feature representations and enhancing model efficiency. Extensive experiments and comprehensive evaluations on the VisDrone2019 and DOTA-v1.0 datasets demonstrate the effectiveness of FGFP-YOLOv8s-P2. This provides robust technical support for efficient small target detection by UAVs in complex scenarios.
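Slice-assisted training and inference run the detector on overlapping tiles so that small targets occupy more pixels per tile; per-tile detections are then shifted back to image coordinates and merged with NMS. A minimal sketch of the tiling step (SAHI-style; tile size and overlap values are illustrative):

```python
def slice_image(width, height, tile=640, overlap=0.2):
    """Compute overlapping tile windows (x1, y1, x2, y2) covering an image.

    Windows near the right/bottom edges are clamped inward so every tile
    keeps the full `tile` size whenever the image is at least that large.
    """
    stride = int(tile * (1 - overlap))
    windows = []
    for y in range(0, max(height - tile, 0) + stride, stride):
        for x in range(0, max(width - tile, 0) + stride, stride):
            x1 = min(x, max(width - tile, 0))
            y1 = min(y, max(height - tile, 0))
            windows.append((x1, y1, min(x1 + tile, width), min(y1 + tile, height)))
    return sorted(set(windows))  # dedupe clamped edge tiles
```

The overlap matters: a target that straddles a tile boundary is still seen whole in the neighboring tile, at the cost of running the detector on roughly (1/(1−overlap))² more windows.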
Citations: 0
Dual-Driven Learning-Based Multiple-Input Multiple-Output Signal Detection for Unmanned Aerial Vehicle Air-to-Ground Communications
Pub Date: 2024-05-02 DOI: 10.3390/drones8050180
Haihan Li, Yongming He, Shuntian Zheng, Fan Zhou, Hongwen Yang
Unmanned aerial vehicle (UAV) air-to-ground (AG) communication plays a critical role in the evolving space–air–ground integrated network of the upcoming sixth-generation cellular network (6G). The integration of massive multiple-input multiple-output (MIMO) systems has become essential for ensuring optimal communication performance. This article presents a novel dual-driven learning-based network for millimeter-wave (mm-wave) massive MIMO symbol detection in UAV AG communications. Our main contribution is that the proposed approach combines a data-driven symbol-correction network with a model-driven orthogonal approximate message passing network (OAMP-Net). Through joint training, the dual-driven network reduces the symbol detection errors propagated through each iteration of the model-driven OAMP-Net. Numerical results demonstrate the superiority of the dual-driven detector over the conventional minimum mean square error (MMSE), orthogonal approximate message passing (OAMP), and OAMP-Net detectors at various noise powers and channel estimation errors. The dual-driven MIMO detector requires a 2–3 dB lower signal-to-noise ratio (SNR) than the MMSE and OAMP-Net detectors to achieve a bit error rate (BER) of 1×10−2 when the channel estimation error is −30 dB. Moreover, it tolerates channel estimation errors 2–3 dB larger when achieving a BER of 1×10−3.
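One of the baselines the abstract cites, the linear MMSE detector, is a one-line estimator: x̂ = (HᴴH + σ²I)⁻¹Hᴴy. A minimal sketch (names are illustrative):

```python
import numpy as np

def mmse_detect(y, H, noise_var):
    """Classical linear MMSE MIMO detector.

    y: received vector (n_rx,); H: channel matrix (n_rx, n_tx);
    noise_var: noise variance sigma^2 (assumes unit-power symbols).
    """
    n_tx = H.shape[1]
    # Solve (H^H H + sigma^2 I) x = H^H y instead of forming the inverse
    return np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_tx), H.conj().T @ y)
```

At zero noise this reduces to the zero-forcing solution; the σ²I regularization is what keeps the detector stable on ill-conditioned channels, which is exactly the regime where learned detectors such as OAMP-Net claim their gains.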
Citations: 0
Model-Free RBF Neural Network Intelligent-PID Control Applying Adaptive Robust Term for Quadrotor System
Pub Date: 2024-05-01 DOI: 10.3390/drones8050179
Sung-Jae Kim, Jinho Suh
This paper proposes a quadrotor control scheme built on an intelligent proportional–integral–derivative (I-PID) controller augmented with a radial basis function (RBF) neural network and a proposed adaptive robust term. The I-PID controller, like the PID controller widely used in quadrotor systems, demonstrates notable robustness. To enhance this robustness further, the time-delay estimation error is compensated by the RBF neural network. Additionally, the adaptive robust term is introduced to address the shortcomings of the neural network, yielding a more robust controller. This supplementary control input integrates an adaptation term to handle significant signal changes and is combined with a reverse saturation filter that removes unnecessary control input in the steady state. The adaptive law of the proposed controller is designed on the basis of Lyapunov stability to guarantee control system stability. To verify the control system, simulations were conducted on a quadrotor maneuvering along a spiral path in a disturbed environment. The simulation results demonstrate that the proposed controller achieves high tracking performance across all six axes. Therefore, the proposed controller can be configured similarly to a conventional PID controller while showing satisfactory performance.
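The "intelligent" part of an I-PID controller is commonly a Fliess-style ultra-local model y′ = F + αu, where the lumped unknown dynamics F is re-estimated every step from the last measurement and control, then cancelled. A minimal single-axis sketch under that assumption; the gains, α, and plant interface are illustrative, and the paper's RBF compensation and adaptive robust term are omitted:

```python
class IPID:
    """Single-axis intelligent PID based on the ultra-local model y' = F + alpha*u."""

    def __init__(self, kp, ki, kd, alpha, dt):
        self.kp, self.ki, self.kd, self.alpha, self.dt = kp, ki, kd, alpha, dt
        self.integral = 0.0
        self.prev_err = 0.0
        self.prev_u = 0.0
        self.prev_y = None

    def step(self, y, y_ref, y_ref_dot=0.0):
        # Estimate the lumped unknown dynamics F from the last sample:
        # F_hat = y_dot - alpha * u_prev
        y_dot = 0.0 if self.prev_y is None else (y - self.prev_y) / self.dt
        f_hat = y_dot - self.alpha * self.prev_u
        # Standard PID terms on the tracking error
        err = y_ref - y
        self.integral += err * self.dt
        d_err = (err - self.prev_err) / self.dt
        # Cancel F_hat and inject the feedback through the alpha gain
        u = (y_ref_dot - f_hat + self.kp * err + self.ki * self.integral + self.kd * d_err) / self.alpha
        self.prev_y, self.prev_u, self.prev_err = y, u, err
        return u
```

Because F absorbs whatever dynamics and disturbances are not modeled, the same controller structure transfers across plants by retuning only α and the PID gains, which is the "model-free" claim in the title.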
Citations: 0
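The control scheme described in the abstract above combines a model-free I-PID law, time-delay estimation of the lumped dynamics, and an RBF-network term that compensates the estimation error. The sketch below illustrates that combination on a toy first-order plant; it is not the paper's quadrotor controller, and the plant, gains, RBF layout, and adaptation rate are all assumptions chosen for the example.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    # Gaussian RBF network evaluated at scalar input x
    phi = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))
    return float(weights @ phi), phi

# illustrative first-order plant (NOT the paper's quadrotor): y' = -y + u + d
dt, steps = 0.01, 500
kp, ki, kd = 8.0, 2.0, 0.5   # assumed PID gains
alpha = 1.0                   # assumed input gain of the I-PID formulation
centers = np.linspace(-2.0, 2.0, 9)
widths = np.full(9, 0.5)
weights = np.zeros(9)
eta = 0.05                    # RBF weight adaptation rate

y = y_prev = e_int = F_hat = 0.0
for k in range(steps):
    t = k * dt
    e = 1.0 - y                       # unit step reference
    e_int += e * dt
    e_dot = -(y - y_prev) / dt
    comp, phi = rbf_forward(e, centers, widths, weights)
    # I-PID law: cancel the lumped-dynamics estimate F_hat, add PID + RBF term
    u = (-F_hat + kp * e + ki * e_int + kd * e_dot + comp) / alpha
    d = 0.3 * np.sin(2.0 * t)         # unknown disturbance
    y_dot = -y + alpha * u + d
    y_prev, y = y, y + y_dot * dt
    # time-delay estimation: infer the lumped unknown dynamics from the last sample
    F_hat = y_dot - alpha * u
    weights += eta * e * phi * dt     # simple gradient adaptation

print(round(y, 3))
```

On this toy plant the RBF term plays the role of the paper's compensation of the time-delay-estimation error; the adaptive robust term and the reverse saturation filter are omitted for brevity.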
Enhancing UAV Aerial Docking: A Hybrid Approach Combining Offline and Online Reinforcement Learning 增强无人机空中对接:结合离线和在线强化学习的混合方法
Pub Date : 2024-04-24 DOI: 10.3390/drones8050168
Yuting Feng, Tao Yang, Yushu Yu
In our study, we explore the task of performing docking maneuvers between two unmanned aerial vehicles (UAVs) using a combination of offline and online reinforcement learning (RL) methods. This task requires a UAV to accomplish external docking while maintaining stable flight control, representing two distinct types of objectives at the task execution level. Direct online RL training could lead to catastrophic forgetting, resulting in training failure. To overcome these challenges, we design a rule-based expert controller and accumulate an extensive dataset. Based on this, we concurrently design a series of rewards and train a guiding policy through offline RL. Then, we conduct comparative verification on different RL methods, ultimately selecting online RL to fine-tune the model trained offline. This strategy effectively combines the efficiency of offline RL with the exploratory capabilities of online RL. Our approach improves the success rate of the UAV’s aerial docking task, increasing it from 40% under the expert policy to 95%.
Citations: 0
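The two-stage strategy in the abstract above, offline learning from a rule-based expert followed by online fine-tuning, can be sketched in a few lines. The example below is a deliberately tiny stand-in: a 1-D "docking" task with a linear policy, behaviour cloning by least squares for the offline stage, and a finite-difference gradient step for the online stage. Real offline/online RL algorithms (and the paper's reward design) are far more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(w):
    # toy 1-D docking: state s = distance to the dock, linear policy a = w*s
    s, total = 1.0, 0.0
    for _ in range(50):
        s = s + 0.1 * (w * s)   # simple kinematics
        total -= s ** 2          # reward: get (and stay) close to the dock
    return total

# offline stage: behaviour cloning from a rule-based "expert" dataset
states = rng.uniform(-1.0, 1.0, 200)
expert_actions = -2.0 * states + 0.05 * rng.normal(size=200)
w_cloned = np.sum(states * expert_actions) / np.sum(states ** 2)  # least squares

# online stage: finite-difference fine-tuning of the cloned policy
w, eps, lr = w_cloned, 0.1, 0.2
for _ in range(100):
    grad = (rollout(w + eps) - rollout(w - eps)) / (2.0 * eps)
    w += lr * grad

print(round(w_cloned, 2), round(w, 2))
```

The cloned policy imitates the imperfect expert; online fine-tuning then pushes the gain beyond what the expert demonstrated, mirroring (in miniature) the paper's jump from a 40% to a 95% success rate.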
Early Drought Detection in Maize Using UAV Images and YOLOv8+ 利用无人机图像和 YOLOv8+ 进行玉米早期干旱检测
Pub Date : 2024-04-24 DOI: 10.3390/drones8050170
Shanwei Niu, Zhigang Nie, Guang Li, Wenyu Zhu
Escalating global climate change significantly impacts the yield and quality of maize, a vital staple crop worldwide, especially during seedling-stage droughts. Traditional detection methods are limited by their single-scenario approach, require substantial human labor and time, and lack accuracy in the real-time monitoring and precise assessment of drought severity. In this study, a novel early drought detection method for maize based on unmanned aerial vehicle (UAV) images and YOLOv8+ is proposed. In the Backbone section, the C2F-Conv module is adopted to reduce model parameters and deployment costs, while the CA attention mechanism module is incorporated to effectively capture fine feature information in the images. The Neck section utilizes the BiFPN fusion architecture and a spatial attention mechanism to enhance the model's ability to recognize small and occluded targets. The Head section introduces an additional 10 × 10 output and integrates the loss functions, which enhances accuracy by 1.46%, reduces training time by 30.2%, and improves robustness. The experimental results demonstrate that the improved YOLOv8+ model achieves precision and recall rates of approximately 90.6% and 88.7%, respectively. The mAP@50 and mAP@50:95 reach 89.16% and 71.14%, respectively, representing increases of 3.9% and 3.3% compared to the original YOLOv8. The UAV image detection speed of the model is up to 24.63 ms, with a model size of 13.76 MB, improvements of 31.6% and 28.8% over the original model, respectively. In comparison with the YOLOv8, YOLOv7, and YOLOv5s models, the proposed method exhibits varying degrees of superiority in mAP@50, mAP@50:95, and other metrics, using drone imagery and deep learning techniques to genuinely advance agricultural modernization.
Citations: 0
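The mAP@50 figures quoted above rest on a simple matching rule: a prediction counts as a true positive when its IoU with a not-yet-matched ground-truth box is at least 0.5, with predictions processed in descending score order. Below is a self-contained sketch of that rule on made-up boxes; it follows the common Pascal VOC-style greedy matching, not the paper's evaluation code.

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# toy example: two ground-truth boxes, three scored predictions
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = sorted([
    ((1, 1, 11, 11), 0.9),    # overlaps gt 0
    ((50, 50, 60, 60), 0.8),  # false positive
    ((21, 19, 31, 29), 0.7),  # overlaps gt 1
], key=lambda p: -p[1])       # process in descending score order

matched, tp = set(), 0
for box, score in preds:
    best, best_iou = None, 0.5  # the "@50" IoU threshold
    for i, gt in enumerate(gts):
        v = iou(box, gt)
        if i not in matched and v >= best_iou:
            best, best_iou = i, v
    if best is not None:
        matched.add(best)
        tp += 1

precision = tp / len(preds)
recall = tp / len(gts)
print(precision, recall)
```

Full mAP additionally sweeps the score threshold to build a precision-recall curve and averages the resulting AP over classes (and, for mAP@50:95, over IoU thresholds from 0.5 to 0.95).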
A Control-Theoretic Spatio-Temporal Model for Wildfire Smoke Propagation Using UAV-Based Air Pollutant Measurements 利用基于无人机的空气污染物测量建立野火烟雾传播的时空控制理论模型
Pub Date : 2024-04-24 DOI: 10.3390/drones8050169
Prabhash Ragbir, A. Kaduwela, Xiaodong Lan, Adam Watts, Zhaodan Kong
Wildfires have the potential to cause severe damage to vegetation, property and most importantly, human life. In order to minimize these negative impacts, it is crucial that wildfires are detected at the earliest possible stages. A potential solution for early wildfire detection is to utilize unmanned aerial vehicles (UAVs) that are capable of tracking the chemical concentration gradient of smoke emitted by wildfires. A spatiotemporal model of wildfire smoke plume dynamics can allow for efficient tracking of the chemicals by utilizing both real-time information from sensors as well as future information from the model predictions. This study investigates a spatiotemporal modeling approach based on subspace identification (SID) to develop a data-driven smoke plume dynamics model for the purposes of early wildfire detection. The model was learned using CO2 concentration data which were collected using an air quality sensor package onboard a UAV during two prescribed burn experiments. Our model was evaluated by comparing the predicted values to the measured values at random locations and showed mean errors of 6.782 ppm and 30.01 ppm from the two experiments. Additionally, our model was shown to outperform the commonly used Gaussian puff model (GPM) which showed mean errors of 25.799 ppm and 104.492 ppm, respectively.
Citations: 0
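The paper above identifies a state-space plume model from sensor data via subspace identification (SID). As a much-simplified stand-in for that pipeline, the sketch below fits a one-step linear predictor to a synthetic CO2-like concentration series by ordinary least squares and reports its mean absolute one-step error; the data, the scalar model order, and the noise level are illustrative assumptions, not the paper's SID method or measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "CO2 concentration" series: decaying plume plus sensor noise
t = np.arange(200)
c = 400.0 + 80.0 * np.exp(-t / 60.0) + rng.normal(0.0, 0.5, t.size)

# one-step linear predictor c[k+1] ~= a*c[k] + b, fitted by least squares;
# a crude scalar stand-in for a subspace-identified state-space model
X = np.column_stack([c[:-1], np.ones(t.size - 1)])
a, b = np.linalg.lstsq(X, c[1:], rcond=None)[0]

pred = a * c[:-1] + b
mean_err = np.mean(np.abs(pred - c[1:]))
print(round(a, 4), round(mean_err, 3))
```

In the same spirit as the paper's evaluation, the fitted model is judged by comparing predicted against measured concentrations; a proper SID method (e.g., N4SID) would instead estimate a multi-dimensional hidden state from Hankel matrices of the measurements.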