
Latest Publications from Drones

Analytical Framework for Sensing Requirements Definition in Non-Cooperative UAS Sense and Avoid
CAS Zone 2, Earth Science · Q1 REMOTE SENSING · Pub Date: 2023-10-03 · DOI: 10.3390/drones7100621
Giancarmine Fasano, Roberto Opromolla
This paper provides an analytical framework to address the definition of sensing requirements in non-cooperative UAS sense and avoid. The generality of the approach makes it useful for the exploration of sensor design and selection trade-offs, for the definition of tailored and adaptive sensing strategies, and for the evaluation of the potential of given sensing architectures, including their interface to airspace rules and traffic characteristics. The framework comprises a set of analytical relations covering the following technical aspects: field of view and surveillance rate requirements in azimuth and elevation; the link between sensing accuracy and closest point of approach estimates, expressed through approximated derivatives valid in near-collision conditions; and the diverse (but interconnected) effects of sensing accuracy and detection range on the probabilities of missed and false conflict detections. A key idea consists of focusing on a specific target time to closest point of approach at obstacle declaration as the key driver for sensing system design and tuning, which allows accounting for the variability of conflict conditions within the aircraft field of regard. Numerical analyses complement the analytical developments to demonstrate their statistical consistency, to show quantitative examples of the variation of sensing performance as a function of the conflict geometry, and to highlight potential implications of the derived concepts. The developed framework can potentially be used to support holistic approaches and evaluations in different scenarios, including the very low-altitude urban airspace.
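The closest-point-of-approach (CPA) quantities that drive the framework can be made concrete with a small sketch (our illustration under a constant-velocity encounter assumption; the function name is ours, not the authors'): the time to CPA follows from the relative position and velocity, and the miss distance is the separation at that time.

```python
import math

def cpa(rel_pos, rel_vel):
    """Time to closest point of approach (CPA) and miss distance for a
    constant-velocity 2D encounter. Illustrative only, not the paper's code."""
    rx, ry = rel_pos
    vx, vy = rel_vel
    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        # No relative motion: the CPA is the current separation, now.
        return 0.0, math.hypot(rx, ry)
    t_cpa = -(rx * vx + ry * vy) / v2      # minimizes |r + t*v|
    miss = math.hypot(rx + t_cpa * vx, ry + t_cpa * vy)
    return t_cpa, miss
```

For example, an intruder 100 m ahead closing at 10 m/s with a 20 m lateral offset gives a 10 s time-to-CPA and a 20 m miss distance; a target time-to-CPA at obstacle declaration of the kind the abstract proposes would be a threshold on the first quantity.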
Citations: 0
Real-Time Object Detection Based on UAV Remote Sensing: A Systematic Literature Review
CAS Zone 2, Earth Science · Q1 REMOTE SENSING · Pub Date: 2023-10-03 · DOI: 10.3390/drones7100620
Zhen Cao, Lammert Kooistra, Wensheng Wang, Leifeng Guo, João Valente
Real-time object detection based on UAV remote sensing is widely required in different scenarios. In the past 20 years, with the development of unmanned aerial vehicle (UAV), remote sensing, deep learning, and edge computing technologies, research on UAV real-time object detection in different fields has become increasingly important. However, since real-time UAV object detection is a comprehensive task involving hardware, algorithms, and other components, its complete implementation is often overlooked. Although there is a large amount of literature on real-time object detection based on UAV remote sensing, little attention has been given to its workflow. This paper systematically reviews previous studies on UAV real-time object detection across application scenarios, hardware selection, real-time detection paradigms, detection algorithms and their optimization technologies, and evaluation metrics. Through visual and narrative analyses, the conclusions cover all proposed research questions. Real-time object detection is most in demand in scenarios such as emergency rescue and precision agriculture. Multi-rotor UAVs and RGB images receive the most attention in applications, and real-time detection mainly uses edge computing with documented processing strategies. GPU-based edge computing platforms are widely used, and deep learning algorithms are preferred for real-time detection. Meanwhile, optimization efforts should focus on deployment to resource-limited computing platforms, for example through lightweight convolutional layers. Besides accuracy, speed, latency, and energy are equally important evaluation metrics. Finally, this paper thoroughly discusses the challenges of sensor-, edge computing-, and algorithm-related lightweight technologies in real-time object detection. It also discusses the prospective impact of future developments in autonomous UAVs and communications on UAV real-time target detection.
Citations: 0
Quantifying Within-Flight Variation in Land Surface Temperature from a UAV-Based Thermal Infrared Camera
CAS Zone 2, Earth Science · Q1 REMOTE SENSING · Pub Date: 2023-10-02 · DOI: 10.3390/drones7100617
Jamal Elfarkh, Kasper Johansen, Victor Angulo, Omar Lopez Camargo, Matthew F. McCabe
Land Surface Temperature (LST) is a key variable used across various applications, including irrigation monitoring, vegetation health assessment and urban heat island studies. While satellites offer moderate-resolution LST data, unmanned aerial vehicles (UAVs) provide high-resolution thermal infrared measurements. However, the continuous and rapid variation in LST makes the production of orthomosaics from UAV-based image collections challenging. Understanding the environmental and meteorological factors that amplify this variation is necessary to select the most suitable conditions for collecting UAV-based thermal data. Here, we capture variations in LST while hovering for 15–20 min over diverse surfaces, covering sand, water, grass, and an olive tree orchard. The impact of different flying heights and times of the day was examined, with all collected thermal data evaluated against calibrated field-based Apogee SI-111 sensors. The evaluation showed a significant error in UAV-based data associated with wind speed, which increased the bias from −1.02 to 3.86 °C for 0.8 to 8.5 m/s winds, respectively. Different surfaces, albeit under varying ambient conditions, showed temperature variations ranging from 1.4 to 6 °C during the flights. The temperature variations observed while hovering were linked to solar radiation, specifically radiation fluctuations occurring after sunrise and before sunset. Irrigation and atmospheric conditions (i.e., thin clouds) also contributed to observed temperature variations. This research offers valuable insights into LST variations during standard 15–20 min UAV flights under diverse environmental conditions. Understanding these factors is essential for developing correction procedures and considering data inconsistencies when processing and interpreting UAV-based thermal infrared data and derived orthomosaics.
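For a rough sense of the reported wind sensitivity, one can linearly interpolate the bias between the two quoted operating points. The linear form is purely our illustrative assumption; the paper reports endpoint values, not a correction model.

```python
def wind_bias_estimate(wind_ms):
    """Linearly interpolate the reported UAV-based LST bias between the two
    quoted operating points: -1.02 degC at 0.8 m/s and 3.86 degC at 8.5 m/s.
    Illustrative only; the linear law is our assumption, not the paper's."""
    w0, b0 = 0.8, -1.02
    w1, b1 = 8.5, 3.86
    return b0 + (wind_ms - w0) * (b1 - b0) / (w1 - w0)
```

Under this assumed law, the bias crosses zero at a wind speed of roughly 2.4 m/s, consistent with the paper's recommendation to prefer calm conditions for thermal data collection.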
Citations: 0
SODCNN: A Convolutional Neural Network Model for Small Object Detection in Drone-Captured Images
CAS Zone 2, Earth Science · Q1 REMOTE SENSING · Pub Date: 2023-10-01 · DOI: 10.3390/drones7100615
Lu Meng, Lijun Zhou, Yangqian Liu
Drone images contain a large number of small, dense targets that are vital for agriculture, security, monitoring, and more. However, detecting small objects remains an unsolved challenge, as they occupy a small proportion of the image and have less distinct features. Conventional object detection algorithms fail to produce satisfactory results for small objects. To address this issue, an improved algorithm for small object detection is proposed by modifying the YOLOv7 network structure. First, the redundant detection head for large objects is removed, and feature extraction for small object detection is brought forward. Second, the number of anchor boxes is increased to improve the recall rate for small objects. Considering the limitations of the CIoU loss function in optimization, the EIoU loss function is employed as the bounding box loss function to achieve more stable and effective regression. Lastly, an attention-based feature fusion module is introduced to replace the Concat module in the FPN. This module considers both global and local information, effectively addressing the challenges of multiscale and small object fusion. Experimental results on the VisDrone2019 dataset demonstrate that the proposed algorithm achieves an mAP50 of 54.03% and an mAP50:90 of 32.06%, outperforming the latest comparable work and significantly enhancing the model's capability for small object detection in dense scenes.
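The EIoU bounding-box loss the abstract refers to can be sketched as follows (a generic EIoU implementation for axis-aligned `(x1, y1, x2, y2)` boxes, not code from the paper): it augments 1 − IoU with penalties on center distance, width difference, and height difference, each normalized by the smallest enclosing box.

```python
def eiou_loss(box_a, box_b):
    """Generic EIoU loss for two (x1, y1, x2, y2) boxes; our sketch of the
    loss family the paper adopts, not the authors' implementation."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection over union.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0
    # Smallest enclosing box.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw * cw + ch * ch
    # Center-distance, width, and height penalties.
    dx = (ax1 + ax2 - bx1 - bx2) / 2.0
    dy = (ay1 + ay2 - by1 - by2) / 2.0
    dist = (dx * dx + dy * dy) / c2 if c2 > 0 else 0.0
    dw = ((ax2 - ax1) - (bx2 - bx1)) ** 2 / (cw * cw) if cw > 0 else 0.0
    dh = ((ay2 - ay1) - (by2 - by1)) ** 2 / (ch * ch) if ch > 0 else 0.0
    return 1.0 - iou + dist + dw + dh
```

Unlike CIoU, which couples width and height through an aspect-ratio term, EIoU penalizes them separately, which is the more direct regression signal the abstract alludes to.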
Citations: 0
Efficient YOLOv7-Drone: An Enhanced Object Detection Approach for Drone Aerial Imagery
CAS Zone 2, Earth Science · Q1 REMOTE SENSING · Pub Date: 2023-10-01 · DOI: 10.3390/drones7100616
Xiaofeng Fu, Guoting Wei, Xia Yuan, Yongshun Liang, Yuming Bo
In recent years, the rise of low-cost mini rotary-wing drone technology across diverse sectors has emphasized the crucial role of object detection within drone aerial imagery. Low-cost mini rotary-wing drones come with intrinsic limitations, especially in computational power and resource availability. This context underscores an urgent need for solutions that combine low latency, high precision, and computational efficiency. Previous methodologies have primarily depended on high-resolution images, leading to considerable computational burdens. To enhance the efficiency and accuracy of object detection in drone aerial images, and building on YOLOv7, we propose the Efficient YOLOv7-Drone. Recognizing the common presence of small objects in aerial imagery, we eliminated the less efficient P5 detection head and incorporated a P2 detection head for increased precision in small object detection. To ensure efficient feature relay from the Backbone to the Neck, channels within the CBS module were optimized. To focus the model more on the foreground and reduce redundant computations, the TGM-CESC module was introduced, achieving the generation of pixel-level constrained sparse convolution masks. Furthermore, to mitigate potential data losses from sparse convolution, we embedded the head context-enhanced method (HCEM). Comprehensive evaluation using the VisDrone and UAVDT datasets demonstrated our model's efficacy and practical applicability. The Efficient YOLOv7-Drone achieved state-of-the-art scores while ensuring real-time detection performance.
Citations: 1
Relative Localization within a Quadcopter Unmanned Aerial Vehicle Swarm Based on Airborne Monocular Vision
CAS Zone 2, Earth Science · Q1 REMOTE SENSING · Pub Date: 2023-09-29 · DOI: 10.3390/drones7100612
Xiaokun Si, Guozhen Xu, Mingxing Ke, Haiyan Zhang, Kaixiang Tong, Feng Qi
Swarming is one of the important trends in the development of small multi-rotor UAVs. The stable operation of UAV swarms and air-to-ground cooperative operations depend on precise relative position information within the swarm. Existing relative localization solutions mainly rely on passively received external information or on expensive and complex sensors, which are not applicable to small-rotor UAV swarm scenarios. Therefore, we develop a relative localization solution based on airborne monocular sensing data to directly realize real-time relative localization among UAVs. First, we apply the lightweight YOLOv8-pose target detection algorithm to realize the real-time detection of quadcopter UAVs and their rotor motors. Then, to improve computational efficiency, we make full use of the geometric properties of UAVs to derive a more adaptable algorithm for solving the P3P problem. To resolve the multi-solution ambiguity when fewer than four motors are detected, we analytically propose a scheme for determining the correct solution based on reasonable attitude information. We also introduce the maximum weight of the motor-detection confidence into the calculation of the relative localization position to further improve accuracy. Finally, we conducted simulations and practical experiments on an experimental UAV. The experimental results verify the feasibility of the proposed scheme, with the performance of the core algorithm significantly improved over the classical algorithm. Our research provides viable solutions for freeing UAV swarms from dependence on external information, applying them to complex environments, improving autonomous collaboration, and reducing costs.
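The geometric idea of ranging from detected rotor motors can be illustrated with a far simpler pinhole approximation (our stand-in, assuming a roughly fronto-parallel motor pair of known physical span; the paper's actual solver handles the full P3P problem with attitude disambiguation):

```python
def range_from_motor_pair(focal_px, motor_span_m, pixel_span_px):
    """Pinhole range estimate from a known motor-to-motor span seen at a
    given pixel separation. A deliberately simplified stand-in for the
    paper's P3P solution; assumes the motor pair lies roughly parallel
    to the image plane."""
    if pixel_span_px <= 0:
        raise ValueError("pixel span must be positive")
    # Similar triangles: Z / motor_span = focal / pixel_span.
    return focal_px * motor_span_m / pixel_span_px
```

With an assumed 800 px focal length, a 0.5 m motor span observed 40 px apart implies a range of about 10 m; the same known-geometry constraint is what lets the paper solve full relative pose from three or four motor detections.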
Citations: 0
Large-Sized Multirotor Design: Accurate Modeling with Aerodynamics and Optimization for Rotor Tilt Angle
CAS Zone 2, Earth Science · Q1 REMOTE SENSING · Pub Date: 2023-09-29 · DOI: 10.3390/drones7100614
Anhuan Xie, Xufei Yan, Weisheng Liang, Shiqiang Zhu, Zheng Chen
Advancements in aerial mobility (AAM) are driven by needs in transportation, logistics, rescue, and disaster relief. Consequently, large-sized multirotor unmanned aerial vehicles (UAVs) with strong power and ample space show great potential. To optimize the design process for large-sized multirotors and reduce physical trial and error, a detailed dynamic model with an accurate aerodynamic component is first established. The center of gravity (CoG) offset and actuator dynamics, which are usually ignored in small-sized multirotors, are also considered. To improve the endurance and maneuverability of large-sized multirotors, the key concern in real applications, a two-loop optimization method for rotor tilt angle design is proposed based on the established model. Its inner loop solves for the dynamic equilibrium points, relaxing the complex dynamic constraints that aerodynamics imposes on the overall optimization problem and improving solution efficiency. Ideal design results can be obtained through this offline process, which greatly reduces the difficulty of physical trial and error. Finally, various experiments are carried out to demonstrate the accuracy of the established model and the effectiveness of the optimization method.
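The two-loop structure can be caricatured with a toy model (entirely our assumption-laden sketch: a bare thrust-balance equilibrium in the inner loop and an invented power-versus-control-authority score in the outer loop, standing in for the paper's full aerodynamic model):

```python
import math

def hover_thrust_per_rotor(mass_kg, tilt_rad, n_rotors=6, g=9.81):
    """Inner loop (toy): per-rotor thrust whose vertical component balances
    weight at a fixed rotor tilt. No aerodynamic model is included."""
    return mass_kg * g / (n_rotors * math.cos(tilt_rad))

def best_tilt(mass_kg, yaw_weight, tilt_grid):
    """Outer loop (toy): sweep candidate tilt angles, trading a T**1.5
    hover-power proxy against a yaw-authority credit from the lateral
    thrust component. The score and its weights are invented here."""
    best_angle, best_score = None, None
    for tilt in tilt_grid:
        t = hover_thrust_per_rotor(mass_kg, tilt)
        score = t ** 1.5 - yaw_weight * t * math.sin(tilt)
        if best_score is None or score < best_score:
            best_angle, best_score = tilt, score
    return best_angle
```

The point of the sketch is the decomposition, not the numbers: the inner loop turns each candidate tilt into an equilibrium operating point, so the outer search never has to carry the equilibrium constraints itself, which mirrors the efficiency argument in the abstract.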
Citations: 0
A Unmanned Aerial Vehicle (UAV)/Unmanned Ground Vehicle (UGV) Dynamic Autonomous Docking Scheme in GPS-Denied Environments
CAS Tier 2, Earth Science; Q1 REMOTE SENSING; Pub Date: 2023-09-29; DOI: 10.3390/drones7100613
Cheng Cheng, Xiuxian Li, Lihua Xie, Li Li
This study designs a navigation and landing scheme for an unmanned aerial vehicle (UAV) to autonomously land on an arbitrarily moving unmanned ground vehicle (UGV) in GPS-denied environments based on vision, ultra-wideband (UWB), and system information. In the approaching phase, an effective multi-innovation forgetting gradient (MIFG) algorithm is proposed to estimate the position of the UAV relative to the target using historical data (estimated distance and relative displacement measurements). Using these estimates, a saturated proportional navigation controller is developed, by which the UAV approaches the target and brings the UGV into the field of view (FOV) of the camera mounted on the UAV. Then, a sensor fusion estimation algorithm based on an extended Kalman filter (EKF) is proposed to achieve accurate landing. Finally, a numerical example and a real experiment are presented to support the theoretical results.
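A minimal sketch of the approach-phase idea, assuming illustrative gains and a simple planar kinematic model (not the paper's MIFG estimator or actual controller): a proportional velocity command toward the estimated relative position, saturated so the UAV respects a maximum linear velocity while chasing the moving UGV.

```python
import math

# Toy approach-phase controller: v = sat(K_p * p_rel), with the norm of the
# command saturated at V_MAX. Gains, limits, and the kinematic model are
# illustrative assumptions.

V_MAX = 3.0   # m/s, assumed velocity limit
K_P = 0.8     # proportional gain

def saturated_pn_command(rel_pos):
    """Proportional velocity command toward the target, norm-saturated."""
    vx, vy = K_P * rel_pos[0], K_P * rel_pos[1]
    speed = math.hypot(vx, vy)
    if speed > V_MAX:
        scale = V_MAX / speed
        vx, vy = vx * scale, vy * scale
    return vx, vy

def simulate(uav=(0.0, 0.0), ugv=(20.0, 10.0), ugv_vel=(0.5, 0.0),
             dt=0.1, steps=400):
    """Chase a constant-velocity UGV; return the final separation distance."""
    ux, uy = uav
    gx, gy = ugv
    for _ in range(steps):
        vx, vy = saturated_pn_command((gx - ux, gy - uy))
        ux, uy = ux + vx * dt, uy + vy * dt
        gx, gy = gx + ugv_vel[0] * dt, gy + ugv_vel[1] * dt
    return math.hypot(gx - ux, gy - uy)

if __name__ == "__main__":
    print(f"final distance to UGV: {simulate():.2f} m")
```

With a pure proportional law the pursuit settles to a small steady-state lag (target speed divided by the gain, here 0.5/0.8 m); closing that residual gap is exactly where the vision/EKF landing phase described in the abstract would take over.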
Citations: 1
Robust Collision-Free Guidance and Control for Underactuated Multirotor Aerial Vehicles
CAS Tier 2, Earth Science; Q1 REMOTE SENSING; Pub Date: 2023-09-27; DOI: 10.3390/drones7100611
Jorge A. Ricardo Jr, Davi A. Santos
This paper is concerned with the robust collision-free guidance and control of underactuated multirotor aerial vehicles in the presence of moving obstacles capable of accelerating; linear velocity and rotor thrust constraints; and matched model uncertainties and disturbances. We address this problem with a hierarchical flight control architecture composed of a supervisory outer-loop guidance module and an inner-loop stabilizing control module. The inner loop is designed using a typical hierarchical control scheme that nests the attitude control loop inside the position one. The effectiveness of this scheme relies on proper time-scale separation (TSS) between the closed-loop (faster) rotational and (slower) translational dynamics, which is not straightforward to enforce in practice. However, by combining an integral sliding mode attitude control law, which guarantees instantaneous tracking of the attitude commands, with a smooth and robust position control law, we enforce satisfaction of the TSS by construction, thus avoiding a loss of robustness and tedious trial-and-error gain tuning. On the other hand, the outer-loop guidance builds on the continuous control obstacles method, extended to respect the velocity and actuator constraints and to avoid multiple moving obstacles that can accelerate. The overall method is evaluated in a numerical Monte Carlo simulation and is shown to provide satisfactory tracking performance, collision-free guidance, and satisfaction of the linear velocity and actuator constraints.
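The role of the integral sliding mode attitude law — fast, disturbance-rejecting attitude tracking that underpins the time-scale separation — can be illustrated on a one-axis toy model. The gains, the disturbance, and the boundary-layer smoothing of the switching term are our assumptions, not the paper's design:

```python
import math

# One-axis toy: theta_ddot = u + d, with bounded matched disturbance d.
# Integral sliding surface s = omega + LAM*e + KI*integral(e); the switching
# gain ETA exceeds the disturbance bound, and sat(s/PHI) is a boundary-layer
# approximation of sign(s) that reduces chattering in discrete time.

LAM, KI = 4.0, 2.0    # sliding-surface gains
ETA, PHI = 3.0, 0.05  # switching gain (> |d|), boundary-layer width

def sat(x):
    return max(-1.0, min(1.0, x))

def simulate(theta_cmd=0.3, dt=0.001, t_end=4.0):
    theta, omega, integ = 0.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        e = theta - theta_cmd
        integ += e * dt
        s = omega + LAM * e + KI * integ               # sliding surface
        d = 1.5 * math.sin(2.0 * t)                    # bounded disturbance
        u = -(LAM * omega + KI * e) - ETA * sat(s / PHI)
        omega += (u + d) * dt                          # theta_ddot = u + d
        theta += omega * dt
        t += dt
    return abs(theta - theta_cmd)

if __name__ == "__main__":
    print(f"final attitude error: {simulate():.4f} rad")
```

Along the closed loop, s_dot = -ETA*sat(s/PHI) + d, so the surface is reached in finite time despite the disturbance; once on it, the error obeys the chosen stable surface dynamics. That guaranteed fast inner-loop tracking is what lets the slower position loop be designed independently.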
Citations: 0
DB-Tracker: Multi-Object Tracking for Drone Aerial Video Based on Box-MeMBer and MB-OSNet
CAS Tier 2, Earth Science; Q1 REMOTE SENSING; Pub Date: 2023-09-27; DOI: 10.3390/drones7100607
Yubin Yuan, Yiquan Wu, Langyue Zhao, Jinlin Chen, Qichang Zhao
Drone aerial videos offer a promising future in modern digital media and remote sensing applications, but effectively tracking several objects in these recordings is difficult. Drone footage typically contains complicated scenery with moving objects such as people, vehicles, and animals, and complications such as large-scale viewing angle shifts and object crossings may occur simultaneously. Random finite sets are incorporated into a detection-based tracking framework that takes the object's location and appearance into account. The method maintains the detection box information of each detected object and constructs the Box-MeMBer object position prediction framework based on MeMBer random finite set point object tracking. We develop a hierarchical connection structure in the OSNet network, building MB-OSNet to obtain object appearance information; connecting feature maps of different levels through the hierarchy lets the network capture rich semantic information at different scales. Similarity measurements for all detections and trajectories are collected in a cost matrix that estimates the likelihood of all possible matches, with each entry comparing a track and a detection in terms of position and appearance. The DB-Tracker algorithm performs excellently in multi-target tracking of drone aerial videos, achieving MOTA of 37.4% and 46.2% on the VisDrone and UAVDT data sets, respectively. DB-Tracker achieves high robustness by comprehensively considering object position and appearance information, especially when handling complex scenes and target occlusion. This makes DB-Tracker a powerful tool in challenging applications such as drone aerial videos.
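The cost-matrix construction described above can be sketched as follows. The weights, similarity measures, gating threshold, and the greedy matcher are illustrative assumptions; the paper's formulation may differ. Each entry blends a normalized center-distance term (position) with a cosine distance between embedding vectors (appearance):

```python
import math

# Toy data-association step: build a track-vs-detection cost matrix from
# position and appearance cues, then extract low-cost pairs greedily.
# W_POS/W_APP/GATE are assumed values, not the paper's.

W_POS, W_APP, GATE = 0.5, 0.5, 0.6

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def build_cost_matrix(tracks, detections, max_dist=100.0):
    """cost[i][j] blends normalized center distance and appearance distance."""
    cost = []
    for t in tracks:
        row = []
        for d in detections:
            pos = min(1.0, math.dist(t["center"], d["center"]) / max_dist)
            app = cosine_distance(t["feature"], d["feature"])
            row.append(W_POS * pos + W_APP * app)
        cost.append(row)
    return cost

def greedy_match(cost):
    """Cheapest-first one-to-one matching, gated at GATE."""
    pairs, used_t, used_d = [], set(), set()
    entries = sorted((c, i, j) for i, row in enumerate(cost)
                     for j, c in enumerate(row))
    for c, i, j in entries:
        if c < GATE and i not in used_t and j not in used_d:
            pairs.append((i, j))
            used_t.add(i)
            used_d.add(j)
    return pairs

if __name__ == "__main__":
    tracks = [{"center": (10, 10), "feature": (1.0, 0.0)},
              {"center": (50, 50), "feature": (0.0, 1.0)}]
    dets = [{"center": (52, 49), "feature": (0.1, 0.9)},
            {"center": (12, 11), "feature": (0.9, 0.1)}]
    print(greedy_match(build_cost_matrix(tracks, dets)))
```

In practice the greedy pass is usually replaced by an optimal assignment solver (e.g. the Hungarian algorithm) over the same cost matrix; the sketch only shows how position and appearance cues combine into one association cost.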
Citations: 0