
International Journal of Image and Data Fusion — Latest Publications

A fusion method for infrared and visible images based on iterative guided filtering and two channel adaptive pulse coupled neural network
IF 2.3 Q3 REMOTE SENSING Pub Date : 2020-08-31 DOI: 10.1080/19479832.2020.1814877
Qiufeng Fan, F. Hou, Feng Shi
ABSTRACT To make full use of the important features of the source images, an infrared and visible image fusion method based on iterative guided filtering and a two-channel adaptive pulse coupled neural network is proposed. The input image is decomposed into a base layer, a small-scale layer and a large-scale layer by an iterative guided filter. The base layers are fused by combining pixel energy and gradient energy. The large-scale and small-scale layers are then fused via the two-channel adaptive pulse coupled neural network. The fused image is obtained by inverting the mixed multi-scale decomposition. Experimental results show that, compared with other multi-scale decomposition methods, the proposed method better separates spatially overlapping features, preserves more detailed information in the fused image, and effectively suppresses artefacts.
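As a rough structural illustration of the pipeline described in the abstract, the sketch below splits two images into base, small-scale and large-scale layers and fuses the base layers by local pixel and gradient energy. It substitutes a plain mean filter for the paper's iterative guided filter and a max-absolute rule for the two-channel PCNN, so all filter sizes and rules here are illustrative assumptions, not the authors' method:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def three_layer_decompose(img, r_small=3, r_large=15):
    # Two smoothing scales split the image into base / small- / large-scale
    # layers; a mean filter stands in for the paper's iterative guided filter.
    smooth_small = uniform_filter(img, size=2 * r_small + 1)
    smooth_large = uniform_filter(smooth_small, size=2 * r_large + 1)
    base = smooth_large
    large = smooth_small - smooth_large   # large-scale detail
    small = img - smooth_small            # small-scale detail
    return base, small, large

def fuse_base(b1, b2, r=7):
    # Weight each base layer by local pixel energy plus gradient energy.
    def energy(b):
        gy, gx = np.gradient(b)
        return uniform_filter(b**2 + gx**2 + gy**2, size=2 * r + 1)
    e1, e2 = energy(b1), energy(b2)
    w = e1 / (e1 + e2 + 1e-12)
    return w * b1 + (1 - w) * b2

ir = np.random.rand(64, 64)   # stand-in for an infrared image
vis = np.random.rand(64, 64)  # stand-in for a visible image
b1, s1, l1 = three_layer_decompose(ir)
b2, s2, l2 = three_layer_decompose(vis)
base = fuse_base(b1, b2)
# Max-absolute selection stands in for the paper's PCNN detail fusion.
small = np.where(np.abs(s1) >= np.abs(s2), s1, s2)
large = np.where(np.abs(l1) >= np.abs(l2), l1, l2)
fused = base + small + large  # inverse of the additive decomposition
```

Because the decomposition is additive, summing a single image's three layers reconstructs it exactly, which is what makes the final inverse step valid.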
Citations: 1
A multi-focus image fusion method based on watershed segmentation and IHS image fusion
IF 2.3 Q3 REMOTE SENSING Pub Date : 2020-07-29 DOI: 10.1080/19479832.2020.1791262
Shaheera Rashwan, A.E. Youssef, B. A. Youssef
ABSTRACT High-magnification optical cameras, such as microscopes or macro-photography lenses, cannot capture an object that is entirely in focus. In this case, image acquisition is done by capturing the object/scene as a set of differently focused images, which are then fused to produce an 'all-in-focus' image that is clear everywhere. This process is called multi-focus image fusion. In this paper, a method named Watershed on Intensity Hue Saturation (WIHS) is proposed to fuse multi-focus images. First, the defocused images are fused using IHS image fusion. Then the marker-controlled watershed segmentation algorithm is used to segment the fused image. Finally, the Sum-Modified Laplacian is applied to measure the focus of the multi-focus images in each region, and the region with the higher focus measure is chosen from its corresponding image to compute the resulting all-in-focus image. The experimental results show that WIHS performs best in quantitative comparison with other methods.
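The per-region focus measure used in the final step above can be sketched as follows; this is a minimal numpy implementation of the standard Sum-Modified Laplacian, with the test images invented for illustration:

```python
import numpy as np

def sum_modified_laplacian(region):
    """Sum-Modified Laplacian (SML) focus measure over an image region:
    sum of |2I(x,y) - I(x-1,y) - I(x+1,y)| + |2I(x,y) - I(x,y-1) - I(x,y+1)|."""
    p = np.pad(region, 1, mode='edge')
    c = p[1:-1, 1:-1]
    ml = (np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1])
          + np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:]))
    return float(ml.sum())

# A high-contrast checkerboard region scores far higher than a flat
# (fully defocused) region, so the sharper source wins the region.
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
flat = np.full((32, 32), 0.5)
sml_sharp = sum_modified_laplacian(sharp)
sml_flat = sum_modified_laplacian(flat)
```

In the WIHS pipeline, each watershed region would be scored this way in every source image and the pixels copied from the highest-scoring source.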
Citations: 3
Practical applications on sustainable development goals in IJIDF
IF 2.3 Q3 REMOTE SENSING Pub Date : 2020-07-02 DOI: 10.1080/19479832.2020.1755083
Jixian Zhang
Since the inception of Sustainable Development Goals (SDGs) by United Nations General Assembly in 2015, the 2030 Agenda has provided a blueprint for shared prosperity in a sustainable world where a...
Citations: 0
Accuracy analysis of Bluetooth-Low-Energy ranging and positioning in NLOS environment
IF 2.3 Q3 REMOTE SENSING Pub Date : 2020-05-25 DOI: 10.1080/19479832.2020.1752314
Deng Yang, Jian Wang, Minmin Wang, Houzeng Han, Yalei Zhang
ABSTRACT To address the low accuracy of Bluetooth-Low-Energy ranging and positioning in non-line-of-sight (NLOS) environments, an NLOS Bluetooth-Low-Energy ranging method based on an NLOS ranging model, and an NLOS positioning algorithm based on the TLS method and the triangular positioning algorithm, are proposed. Firstly, a line-of-sight (LOS) Bluetooth-Low-Energy ranging model is established from the RSSI value and the actual distance between the Bluetooth-Low-Energy beacon and the terminal equipment. Secondly, based on the LOS ranging model and the threshold-corrected RSSI peak, an NLOS ranging model is established. Thirdly, the NLOS RSSI value is processed by the NLOS ranging model to obtain the optimal ranging value. Finally, high-precision positioning coordinates are obtained by combining the NLOS positioning algorithm with the optimal ranging values. Two experiments were carried out, and the results show that within a range of 7 m, the average ranging accuracy of the improved NLOS ranging method is 0.37 m, an improvement of 49.19% over the traditional method. The average positioning accuracy of the proposed positioning algorithm is 0.4 m. Therefore, in NLOS environments, the proposed ranging method and positioning algorithm can significantly improve the accuracy of Bluetooth-Low-Energy ranging and positioning.
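The LOS ranging model in the first step is typically a log-distance path-loss fit. The sketch below inverts such a model to turn an RSSI reading into a distance; the reference RSSI and path-loss exponent are illustrative values, not fitted parameters from the paper (which additionally applies an NLOS correction to the RSSI before inversion):

```python
import math

def rssi_to_distance(rssi, rssi_d0=-45.0, d0=1.0, n=2.5):
    """Invert the log-distance path-loss model
    RSSI(d) = RSSI(d0) - 10*n*log10(d/d0).
    rssi_d0 (dBm at reference distance d0) and n are assumed values;
    in practice both are calibrated per environment."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

# Round-trip check: synthesise the RSSI a beacon 4 m away would give
# under this model, then recover the distance from it.
d = 4.0
rssi = -45.0 - 10 * 2.5 * math.log10(d)
est = rssi_to_distance(rssi)
```

Under NLOS conditions the measured RSSI is attenuated further, which is why the paper corrects the RSSI peak with a threshold before feeding it to the inverse model.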
Citations: 4
Enabling real-time and high accuracy tracking with COTS RFID devices
IF 2.3 Q3 REMOTE SENSING Pub Date : 2020-04-21 DOI: 10.1080/19479832.2020.1752315
K. Zhao, Binghao Li
ABSTRACT RFID technology has been widely used for object tracking in industry. Very high accuracy (centimetre-level) positioning based on RFID carrier-phase measurement has been reported. However, most of the proposed fine-grained tracking methods only work under very strict preconditions. For example, some methods require either the reader or the tag to move along a certain one-dimensional track at constant speed, while others need pre-deployed tags at known locations as reference points. This paper proposes a new approach that can track RFID tags without knowing the tag's speed, track or initial location, requiring only the antenna coordinates. The experimental results show that the algorithm converges within 10 seconds and that the average positioning accuracy reaches centimetre or even sub-centimetre level under different preconditions. The algorithm has also been optimised with a prediction-update procedure to meet the requirements of both post-processing and real-time tracking.
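Carrier-phase ranging of the kind mentioned above is ambiguous: the backscatter phase wraps every half wavelength, so one reading yields a ladder of candidate distances. The sketch below enumerates those candidates; the 920 MHz carrier and the way the ambiguity is resolved (picking the candidate nearest a known truth) are illustrative assumptions, since the paper resolves the ambiguity by filtering consecutive measurements over time:

```python
import math

C = 3e8
FREQ = 920e6          # typical UHF RFID carrier frequency (assumed)
LAM = C / FREQ        # wavelength, roughly 0.33 m

def candidate_distances(phase, k_max=10):
    """For backscatter, phase = (4*pi*d/lam) mod 2*pi, so the measurement
    repeats every lam/2. Each integer ambiguity k gives one candidate
    distance d_k = (phase/(2*pi) + k) * lam/2."""
    return [(phase / (2 * math.pi) + k) * LAM / 2 for k in range(k_max)]

# Simulate the wrapped phase a tag at 1.23 m would produce, then show
# that the true distance appears among the candidates.
d_true = 1.23
phase = (4 * math.pi * d_true / LAM) % (2 * math.pi)
cands = candidate_distances(phase)
best = min(cands, key=lambda d: abs(d - d_true))
```

A tracker maintains the ladder across readings and keeps the candidate sequence consistent with plausible motion, which is what allows centimetre-level accuracy from millimetre-precise but wrapped phase.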
Citations: 3
Machine learning on high performance computing for urban greenspace change detection: satellite image data fusion approach
IF 2.3 Q3 REMOTE SENSING Pub Date : 2020-04-10 DOI: 10.1080/19479832.2020.1749142
Nilkamal More, V. Nikam, Biplab Banerjee
ABSTRACT Green spaces serve important environmental and quality-of-life functions in urban environments. Fast-changing urban regions require continuous and fast green space change detection. This study focuses on GPU-accelerated green space change detection for time-efficient green space identification and monitoring. Using spatio-temporal data from satellite images and a support vector machine (SVM) as the classification algorithm, this research proposes a platform for green space analysis and change detection. The main contributions of this research include the fusion of the thermal band, in addition to the near-infrared, red and green bands, combining the high spectral information of the moderate resolution imaging spectroradiometer (MODIS) dataset with the high spatial information of the LANDSAT 7 dataset. The novel method is employed to calculate the total green space area in the Mumbai metropolitan area and monitor the changes from 2005 to 2019. The paper discusses the findings of this strategy and reveals that over the course of 15 years the overall green space was reduced to 50%.
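Once the bands are fused, vegetated area can be estimated per image date and compared across dates. The sketch below uses an NDVI threshold as a stand-in for the paper's SVM classifier; the 0.3 cut-off is a common vegetation threshold, not a value from the paper, and the toy scene is invented:

```python
import numpy as np

def ndvi(nir, red):
    # Normalised Difference Vegetation Index from NIR and red reflectance.
    return (nir - red) / (nir + red + 1e-12)

def green_fraction(nir, red, thresh=0.3):
    """Fraction of pixels classified as vegetation. A threshold on NDVI
    stands in for the paper's SVM; thresh=0.3 is an assumed cut-off."""
    return float((ndvi(nir, red) > thresh).mean())

# Toy scene: left half vegetated (high NIR reflectance), right half built-up.
nir = np.hstack([np.full((8, 8), 0.6), np.full((8, 8), 0.2)])
red = np.full((8, 16), 0.2)
frac = green_fraction(nir, red)
```

Change detection then reduces to comparing `green_fraction` (or the classified masks) between two dates of co-registered fused imagery.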
Citations: 11
A multi-sensor-based evaluation of the morphometric characteristics of Opa river basin in Southwest Nigeria
IF 2.3 Q3 REMOTE SENSING Pub Date : 2020-04-02 DOI: 10.1080/19479832.2019.1683622
A. O. Adewole, Felix Ike, A. Eludoyin
ABSTRACT Studies have shown that many river basins in sub-Saharan Africa are largely unmonitored, partly because they are poorly gauged or totally ungauged. In this study, remote sensing products that are freely available in the region (Landsat, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and the Shuttle Radar Topography Mission (SRTM)) were harnessed for the monitoring of the Opa river basin in southwestern Nigeria. The remote sensing products were used complementarily with topographical sheets (1:50,000), ground-based observation and global positioning systems to determine selected morphometric characteristics as well as changes in landuse/landcover and their impact on peak runoff in the Opa river basin. Results showed that the basin is a 5th-order basin whose land area has been subjected to different natural and anthropogenic influences within the study period. Urbanisation is a major factor that threatens the basin with degradation and observed changes, and the threats are expected to worsen if restoration of some tributaries is not considered. The study concluded that complementary use of available remote sensing products in the region will provide an important level of decision-support information for the management and monitoring of river basins.
Citations: 5
Video-based salient object detection using hybrid optimisation strategy and contourlet mapping
IF 2.3 Q3 REMOTE SENSING Pub Date : 2020-04-02 DOI: 10.1080/19479832.2019.1683625
S. A., H. N. Suresh
ABSTRACT Advances in salient object detection have attracted many researchers and are significant in several computer vision applications. However, efficient salient object detection using still images remains a major challenge. This paper proposes a salient object detection technique using the proposed Spider-Gray Wolf Optimiser (S-GWO) algorithm, designed by combining the Gray Wolf Optimiser (GWO) and Spider Monkey Optimisation (SMO). The technique involves keyframe extraction, saliency mapping and contourlet mapping, and fusion of the obtained outputs using optimal coefficients. Initially, the extracted frames are subjected to saliency mapping and contourlet mapping simultaneously in order to determine the quality of each pixel. Then, the outputs obtained from the saliency mapping and contourlet mapping are fused using the selected coefficients to obtain the final result used for detecting the salient objects. Here, the proposed S-GWO is employed to select the optimal fusion coefficients. Experimental evaluation based on the performance metrics reveals that the proposed S-GWO attained a maximal accuracy, sensitivity and specificity of 0.914, 0.861 and 0.929, respectively.
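The coefficient-selection step above is an optimisation over fusion weights. The sketch below replaces the paper's S-GWO metaheuristic with a coarse grid search over a two-weight blend, purely to show the shape of the problem; the maps, the "ground truth" target, and the search strategy are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
sal = rng.random((16, 16))       # stand-in saliency map
cont = rng.random((16, 16))      # stand-in contourlet map
target = 0.7 * sal + 0.3 * cont  # toy reference fusion to recover

def fuse(a, b):
    # Weighted fusion of the two maps with coefficients (a, b).
    return a * sal + b * cont

# A coarse grid search stands in for S-GWO: both simply search the
# coefficient space for the fusion that best matches the objective.
best = min(
    ((a, 1 - a) for a in np.linspace(0, 1, 101)),
    key=lambda w: np.abs(fuse(*w) - target).sum(),
)
```

A swarm optimiser such as S-GWO explores the same coefficient space but scales to many coefficients and non-convex objectives where a grid becomes infeasible.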
Citations: 1
Improved auto-extrinsic calibration between stereo vision camera and laser range finder
IF 2.3 Q3 REMOTE SENSING Pub Date : 2020-02-28 DOI: 10.1080/19479832.2020.1727574
Archana Khurana, K. S. Nagla
ABSTRACT This study presents a way to accurately estimate the extrinsic calibration parameters between a stereo vision camera and a 2D laser range finder (LRF), based on 3D reconstruction of a monochromatic calibration board and geometric co-planarity constraints between the views from the two sensors. It supports automatic extraction of plane-line correspondences between the camera and the LRF using the monochromatic board, further improved by selecting optimal threshold values for laser-scan dissection to extract line features from the LRF data. Calibration parameters are then obtained by solving the co-planarity constraints between the estimated plane and line. Furthermore, the obtained parameters are refined by minimising the reprojection error and the error from the co-planarity constraints. Calibration accuracy also benefits from the extraction of reliable plane-line correspondences using the monochromatic board, which reduces the impact of the range-reflectivity bias observed in LRF data on a checkerboard. Since the proposed method automatically extracts feature correspondences, it greatly reduces operator time compared with manual methods. The performance is validated by extensive experimentation and simulation, and the parameters estimated by the proposed method are more accurate than those of conventional methods.
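The co-planarity constraint at the heart of this calibration says that LRF points on the board, transformed into the camera frame by the extrinsics (R, t), must lie on the camera-estimated board plane. A minimal sketch of that residual, with an invented toy scene at identity extrinsics:

```python
import numpy as np

def coplanarity_residuals(points_lrf, R, t, n, d):
    """Signed distance of each transformed LRF point to the plane n.x = d
    estimated by the camera. At the true extrinsics (R, t) the residuals
    vanish; calibration minimises them over (R, t)."""
    pts_cam = points_lrf @ R.T + t
    return pts_cam @ n - d

# Toy setup: identity extrinsics, calibration plane z = 2 seen by both
# sensors, and three LRF hits lying exactly on that plane.
R, t = np.eye(3), np.zeros(3)
n, d = np.array([0.0, 0.0, 1.0]), 2.0
pts = np.array([[0.1, 0.0, 2.0], [0.5, -0.3, 2.0], [1.0, 0.2, 2.0]])
res = coplanarity_residuals(pts, R, t, n, d)
```

In the full method these residuals from many board poses, together with the reprojection error, form the least-squares objective that the refinement stage minimises.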
Citations: 3
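The co-planarity step in the abstract above — each board's estimated plane must contain the line of laser hits — admits a compact linear formulation, in the spirit of the classic Zhang–Pless camera/LRF calibration. The sketch below is illustrative only: the function name, the six synthetic board poses, and the noise-free data are assumptions, not this paper's implementation.

```python
import numpy as np

def calibrate_lrf_to_camera(planes, scans):
    """Linear extrinsic estimate (R, t) mapping 2D-LRF coordinates into the
    camera frame.  Each laser point p = (px, py, 0) hitting a board whose
    camera-frame plane is (n, d) must satisfy n^T (R p + t) = d, which is
    linear in r1, r2 (the first two columns of R) and t; then r3 = r1 x r2.
    """
    A, b = [], []
    for (n, d), pts in zip(planes, scans):
        for px, py in pts:
            A.append(np.concatenate([px * n, py * n, n]))
            b.append(d)
    x, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    r1, r2, t = x[0:3], x[3:6], x[6:9]
    # Project the recovered columns onto the nearest rotation matrix (SVD).
    M = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(M)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R, t

# Synthetic sanity check: recover a known extrinsic from 6 board poses.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_gt = Q * np.sign(np.linalg.det(Q))          # random ground-truth rotation
t_gt = rng.normal(size=3)

planes, scans = [], []
for _ in range(6):
    p0, v = rng.normal(size=2), rng.normal(size=2)   # a scan line in z = 0
    pts = [p0 + s * v for s in np.linspace(-1.0, 1.0, 5)]
    X = [R_gt @ np.array([px, py, 0.0]) + t_gt for px, py in pts]
    n = np.cross(X[-1] - X[0], rng.normal(size=3))   # a plane containing the line
    n /= np.linalg.norm(n)
    planes.append((n, n @ X[0]))
    scans.append(pts)

R_est, t_est = calibrate_lrf_to_camera(planes, scans)
```

With noise-free synthetic data the linear solve recovers (R, t) exactly; the refinement stage the abstract describes (minimising reprojection and co-planarity error) matters once real LRF noise and reflectivity bias enter.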
An optimised multi-scale fusion method for airport detection in large-scale optical remote sensing images
IF 2.3 Q3 REMOTE SENSING Pub Date : 2020-02-20 DOI: 10.1080/19479832.2020.1727573
Shoulin Yin, Hang Li, Lin Teng, Man Jiang, Shahid Karim
ABSTRACT Airport detection in remote sensing images is an important task that plays a significant role in both military and civil applications. Conventional algorithms have mostly been applied to small-scale remote sensing images and are inefficient at searching for objects in large-scale, high-resolution imagery. Their computational complexity is high, making them unsuitable for rapid localisation with high detection accuracy in high-resolution remote sensing images. To address these problems, we propose an optimised multi-scale fusion method for airport detection in large-scale optical remote sensing images. First, we apply discrete wavelet multi-scale decomposition to the remote sensing image and extract multiple features of the object in each sub-band. Second, a fusion rule based on optimised region selection fuses the features at each scale; singular-value decomposition (SVD) is used to fuse the low-frequency sub-band and principal component analysis (PCA) to fuse the high-frequency sub-bands. Third, the final fused image is obtained by weighted fusion. Finally, a selective search method detects the airport in the fused image. Experimental results show that the detection accuracy is better than that of other state-of-the-art methods.
{"title":"An optimised multi-scale fusion method for airport detection in large-scale optical remote sensing images","authors":"Shoulin Yin, Hang Li, Lin Teng, Man Jiang, Shahid Karim","doi":"10.1080/19479832.2020.1727573","DOIUrl":"https://doi.org/10.1080/19479832.2020.1727573","url":null,"abstract":"ABSTRACT Airport detection in remote sensing images is an important process which plays a significant role in military and civil areas. Mostly, conventional algorithms have been used for airport detection from a small-scale remote sensing image and revealed the less efficient ability of searching the object from a large-scale high-resolution remote sensing image. The computational complexity of these algorithms is high and these are not useful for rapid localisation with high detection accuracy in high-resolution remote sensing images. Aiming to solve the above problems, we propose an optimised multi-scale fusion method for airport detection in large-scale optical remote sensing images. Firstly, we execute discrete wavelet multi-scale decomposition for remote sensing image and extract the multiple features of the object in each sub-band. Secondly, the fusion rule based on the optimised region selection is used to fuse the features on each scale. Meanwhile, singular-value decomposition (SVD) is utilised for fusing low-frequency and principal component analysis (PCA) is utilised to fuse the high-frequency, respectively. Thirdly, the final-fused image is acquired by weighted fusion. Finally, the selective search method is employed to detect the airport in the fused image. 
Experimental results show that the detection accuracy is better than the other state-of-the-art methods.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"11 1","pages":"201 - 214"},"PeriodicalIF":2.3,"publicationDate":"2020-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2020.1727573","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44338123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30
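The fusion pipeline in the abstract above — wavelet decomposition, SVD-based low-frequency fusion, PCA-based high-frequency fusion, inverse transform — can be sketched with a one-level Haar transform. Everything below (a single decomposition level, the singular-value energy weighting, the function names) is a simplified illustration under those assumptions, not the authors' implementation:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar decomposition into LL and (LH, HL, HH) sub-bands."""
    a, d = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2           # rows
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, (lh, hl, hh)

def ihaar2d(ll, bands):
    """Inverse of haar2d (perfect reconstruction)."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def pca_weights(b1, b2):
    """Fusion weights from the leading eigenvector of the 2x2 band covariance."""
    C = np.cov(np.vstack([b1.ravel(), b2.ravel()]))
    w = np.abs(np.linalg.eigh(C)[1][:, -1])   # eigh sorts eigenvalues ascending
    return w / w.sum()

def fuse_images(img1, img2):
    ll1, d1 = haar2d(img1)
    ll2, d2 = haar2d(img2)
    # Low frequency: weight each LL band by its singular-value energy.
    e1 = np.linalg.svd(ll1, compute_uv=False).sum()
    e2 = np.linalg.svd(ll2, compute_uv=False).sum()
    ll = (e1 * ll1 + e2 * ll2) / (e1 + e2)
    # High frequency: PCA-derived weights per detail band.
    bands = []
    for b1, b2 in zip(d1, d2):
        w1, w2 = pca_weights(b1, b2)
        bands.append(w1 * b1 + w2 * b2)
    return ihaar2d(ll, bands)
```

Fusing an image with itself returns the image unchanged (both weighting schemes degenerate to an equal average), which is a quick sanity check on the rules before feeding in real multi-sensor pairs.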