
Methods in Oceanography: Latest Publications

In situ phase-domain calibration of oxygen Optodes on profiling floats
Pub Date : 2016-12-01 DOI: 10.1016/j.mio.2016.09.007
Robert Drucker, Stephen C. Riser

Comparison of profiles from oxygen Optodes deployed on profiling floats with ship-based bottle casts taken at the time of deployment shows typical low biases of approximately 0 to −40 μmol kg−1. Most proposed methods to correct these biases use linear or multiplicative corrections of the derived variable O2. Some of these methods depend on specific reference data such as deployment casts or air measurements. Here, we describe a versatile in situ method to recalculate O2 directly from Optode phase and temperature by recalibrating two coefficients of the modified Stern–Volmer equation. This method may be used to calibrate most floats deployed with Optodes to date, as well as present floats, including those equipped with air measurement capability. Reference data can be taken from historic ship casts, climatology, deployment casts, and/or air measurements, depending on availability.

In situ calibrations were performed on 147 Optodes deployed on UW floats between 2004 and 2015, using only World Ocean Database (WOD) references. Median differences relative to World Ocean Atlas (WOA) 2009 climatology were reduced from ∼6% to ∼1%. Deployment casts were used to estimate error for eight Argo floats deployed in the Indian and Pacific Oceans; the aggregate error was reduced from 8% to 0.3%.

Comparison of six pairs of Optodes deployed on the same float showed relative errors after in situ calibration of 0.1 ± 0.6 μmol kg−1. WOD-calibrated surface air oxygen values for nineteen Optode floats with air-measurement capability were compared with expected oxygen levels from NCEP surface level pressures and showed typical errors of <±2%.

Using data from eight floats with deployment casts, comparison of phase-domain linear correction with oxygen-domain linear correction showed a difference of less than ±2%. Comparison of surface gain correction with deployment casts found gain-corrected values below the depth of the oxygen minimum to be consistently low, with residuals of approximately −0.5 to −4.5%.
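The phase-domain recalibration can be sketched as follows. Note that the coefficient layout of the modified Stern–Volmer form below, the helper names, and the choice of which two coefficients to adjust are a hypothetical stand-in for the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import least_squares

def stern_volmer_o2(phase, temp, c):
    # O2 from Optode phase and temperature; the polynomial layout of the
    # coefficients c is a generic stand-in, not the paper's exact form.
    ksv = c[0] + c[1] * temp + c[2] * temp ** 2   # Stern-Volmer constant K_sv(T)
    p0 = c[3] + c[4] * temp                       # unquenched phase P0(T)
    pc = c[5] + c[6] * phase                      # linearly corrected phase
    return (p0 / pc - 1.0) / ksv

def recalibrate_two_coeffs(phase, temp, o2_ref, c_factory, idx=(0, 3)):
    # Refit two coefficients (offsets of K_sv and P0 here) so that O2
    # recomputed from raw phase and temperature matches the reference data.
    idx = list(idx)

    def residual(adj):
        c = np.array(c_factory, dtype=float)
        c[idx] += adj
        return stern_volmer_o2(phase, temp, c) - o2_ref

    fit = least_squares(residual, x0=np.zeros(2))
    c_new = np.array(c_factory, dtype=float)
    c_new[idx] += fit.x
    return c_new
```

In practice the reference O2 would come from WOD ship casts, climatology, deployment casts, or air measurements matched to the float profiles, rather than from synthetic vectors.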

Citations: 20
Edge-based cuing for detection of benthic camouflage
Pub Date : 2016-04-01 DOI: 10.1016/j.mio.2016.05.005
Lakshman Prasad, Hanumant Singh, Scott Gallager

Locating marine organisms in their natural habitats is important for understanding ocean biodiversity. Many species are often camouflaged in their surroundings, rendering them hard to detect. Our increasing ability to image large areas of the ocean floor produces millions of images, which must be inspected to spot the occasional organism. This calls for automation of camouflage detection. We investigate reliable detectability of marine camouflage by looking for structural regularities as cues to locating organisms in their natural settings. We study skates and flounder, which use different mechanisms to avoid detection. We introduce a simple edge-based criterion for detecting local structural regularity to reduce the image area to be inspected for likely presence of camouflaged organisms. This sets the stage for efficient use of more complex algorithms to confirm detections and aid in marine census. We also study the possibility of detecting octopuses based on a simple measure of texture applied to a hierarchical segmentation of octopus images.
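One generic way to express such an edge-based regularity cue (a sketch, not the authors' exact criterion) is local gradient-orientation coherence from the structure tensor: blocks whose edges share a dominant orientation, as along a skate's outline, score high and can be flagged for closer inspection.

```python
import numpy as np

def edge_regularity_map(img, block=32):
    # Score each block by gradient-orientation coherence (structure tensor):
    # 1.0 = all edges in the block share one orientation, 0.0 = isotropic.
    gy, gx = np.gradient(img.astype(float))
    h, w = img.shape
    scores = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            win = (slice(i * block, (i + 1) * block),
                   slice(j * block, (j + 1) * block))
            jxx = (gx[win] ** 2).sum()
            jyy = (gy[win] ** 2).sum()
            jxy = (gx[win] * gy[win]).sum()
            tr = jxx + jyy
            det = jxx * jyy - jxy ** 2
            disc = np.sqrt(max(tr * tr - 4.0 * det, 0.0))
            # coherence = (lambda_max - lambda_min) / (lambda_max + lambda_min)
            scores[i, j] = disc / tr if tr > 1e-12 else 0.0
    return scores
```

Thresholding such a score map reduces the image area handed to heavier confirmation algorithms, which is the role the paper's cue plays.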

Citations: 0
Imperfect automatic image classification successfully describes plankton distribution patterns
Pub Date : 2016-04-01 DOI: 10.1016/j.mio.2016.04.003
Robin Faillettaz, Marc Picheral, Jessica Y. Luo, Cédric Guigand, Robert K. Cowen, Jean-Olivier Irisson

Imaging systems were developed to explore the fine scale distributions of plankton (<10 m), but they generate huge datasets that are still a challenge to handle rapidly and accurately. So far, imaged organisms have been either classified manually or pre-classified by a computer program and later verified by human operators. In this paper, we post-process a computer-generated classification, obtained with the common ZooProcess and PlanktonIdentifier toolchain developed for the ZooScan, and test whether the same ecological conclusions can be reached with this fully automatic dataset and with a reference, manually sorted, dataset. The Random Forest classifier outputs the probabilities that each object belongs in each class and we discard the objects with uncertain predictions, i.e. under a probability threshold defined based on a 1% error rate in a self-prediction of the learning set. Keeping only well-predicted objects enabled considerable improvements in average precision, 84% for biological groups, at the cost of diminishing recall (by 39% on average). Overall, it increased accuracy by 16%. For most groups, the automatically-predicted distributions were comparable to the reference distributions and resulted in the same size-spectra. Automatically-predicted distributions also resolved ecologically-relevant patterns, such as differences in abundance across a mesoscale front or fine-scale vertical shifts between day and night. This post-processing method is tested on the classification of plankton images through Random Forest here, but is based on basic features shared by all machine learning methods and could thus be used in a broad range of applications.
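A minimal sketch of this thresholding step with scikit-learn, assuming cross-validated probabilities stand in for the paper's self-prediction of the learning set; the 1% error target is from the text, everything else is illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def probability_threshold(clf, X, y, max_error=0.01):
    # Lowest confidence threshold at which the self-predicted error on the
    # retained objects drops to <= max_error (1% in the paper). Cross-
    # validated probabilities approximate the learning-set self-prediction.
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    classes = np.unique(y)            # column order of predict_proba
    conf = proba.max(axis=1)
    pred = classes[proba.argmax(axis=1)]
    for t in np.linspace(0.0, 1.0, 101):
        keep = conf >= t
        if not keep.any():
            break
        if (pred[keep] != y[keep]).mean() <= max_error:
            return t
    return 1.0
```

At prediction time, objects whose highest class probability falls below the returned threshold are simply discarded, trading recall for precision as described above.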

Citations: 44
To sea and to see: That is the answer
Pub Date : 2016-04-01 DOI: 10.1016/j.mio.2016.05.003
Jules S. Jaffe

In this article Dr. Jules S. Jaffe chronicles his development as a scientist and engineer. The story starts during his middle school years and continues up until the present day. Dr. Jaffe, as an inventor of technology for ocean exploration, has played a role in a number of advances in ocean engineering, ranging from the development of a planar laser imaging system for sensing fluorescent microstructure, to swarms of autonomous underwater floats, to a current generation of underwater microscopes. The emphasis of the article is on career development and process rather than the exact, detailed documentation of technology. Dr. Jaffe is also the Editor in Chief of Methods in Oceanography, and he instituted these autobiographies for exactly this purpose: to give younger, aspiring professionals an example of a career that has not been “straight through”, but rather a meandering path through a multitude of projects, proposals, and relationships with colleagues, students, and funding agencies.

Citations: 10
Computer Vision in Oceanography
Pub Date : 2016-04-01 DOI: 10.1016/j.mio.2016.07.002
David Kriegman, Benjamin L. Richards, Hanumant Singh (Guest Editors)
Citations: 1
Refractive 3D reconstruction on underwater images
Pub Date : 2016-04-01 DOI: 10.1016/j.mio.2016.03.001
Anne Jordt, Kevin Köser, Reinhard Koch

Cameras can be considered measurement devices complementary to acoustic sensors for surveying marine environments. When calibrated and used correctly, these visual sensors are well-suited for automated detection, quantification, mapping, and monitoring applications, and for producing high-accuracy 3D models or detecting change. In underwater scenarios, cameras are often set up in pressure housings behind a flat glass window, a flat port, through which they observe the environment. In this contribution, a geometric model for image formation is discussed that explicitly considers refraction at the interface under realistic assumptions, such as a slightly misaligned camera (with respect to the glass normal) and the thick glass ports common in deep-sea applications. Then, starting from camera calibration, a complete, fully automated 3D reconstruction system is discussed that takes an image sequence and produces a 3D model. Newly derived refractive estimators for sparse two-view geometry, pose estimation, bundle adjustment, and dense depth estimation are discussed and evaluated in detail.
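The core of any flat-port model is vector-form Snell's law applied twice, housing air → glass → water. The sketch below uses illustrative refractive indices, not the paper's calibrated parameters, and ignores the lateral offset introduced by port thickness:

```python
import numpy as np

def refract(d, n, n1, n2):
    # Vector Snell's law at a planar interface. d: unit ray direction,
    # n: unit normal pointing against the incoming ray, n1 -> n2 indices.
    d = d / np.linalg.norm(d)
    cos_i = -np.dot(n, d)
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:
        raise ValueError("total internal reflection")
    return r * d + (r * cos_i - np.sqrt(k)) * n

def trace_flat_port(d_air, normal, n_air=1.0, n_glass=1.47, n_water=1.33):
    # A camera ray crosses two parallel interfaces: housing air -> glass
    # -> water. For parallel interfaces the glass shifts a thick-port ray
    # laterally but does not change its final direction in the water.
    d_glass = refract(d_air, normal, n_air, n_glass)
    d_water = refract(d_glass, normal, n_glass, n_water)
    return d_glass, d_water
```

An on-axis ray passes undeviated, which is why refraction effects grow toward the image corners; a misaligned camera breaks this symmetry, motivating the explicit interface model.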

Citations: 48
The LOKI underwater imaging system and an automatic identification model for the detection of zooplankton taxa in the Arctic Ocean
Pub Date : 2016-04-01 DOI: 10.1016/j.mio.2016.03.003
Moritz Sebastian Schmid, Cyril Aubry, Jordan Grigor, Louis Fortier

We deployed the Lightframe On-sight Keyspecies Investigation (LOKI) system, a novel underwater imaging system providing cutting-edge imaging quality, in the Canadian Arctic during fall 2013. A Random Forests machine learning model was built to automatically identify zooplankton in LOKI images. The model successfully distinguished between 114 different categories of zooplankton and particles. The high-resolution taxonomical tree included many species, stages, as well as sub-groups based on animal orientation or condition in images. Results from a machine learning regression model of prosome length (R² = 0.97) were used as a key predictor in the automatic identification model. Internal validation of the automatic identification model on test data demonstrated that the model performed with overall high accuracy (86%) and specificity (86%). This was confirmed by confusion matrices for external testing results, based on automatic identifications for two complete stations. For station 101, from which images had also been used for training, accuracy and specificity were 85%. For station 126, from which images had not been used to train the model, accuracy and specificity were 81%. Further comparisons between model results and microscope identifications of zooplankton in samples from the two test stations showed good agreement for most taxa. LOKI’s image quality makes it possible to build accurate automatic identification models of very high taxonomic detail, which will play a critical role in future studies of zooplankton dynamics and zooplankton coupling with other trophic levels.
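Accuracy and per-class specificity of the kind reported above can be read directly off a confusion matrix (rows = true class, columns = predicted class). A minimal helper, independent of the paper's actual matrices:

```python
import numpy as np

def accuracy_and_specificity(cm):
    # cm: confusion matrix, rows = true class, columns = predicted class.
    # Returns overall accuracy and per-class specificity TN / (TN + FP).
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    accuracy = np.trace(cm) / total
    spec = np.empty(cm.shape[0])
    for k in range(cm.shape[0]):
        tn = total - cm[k, :].sum() - cm[:, k].sum() + cm[k, k]
        fp = cm[:, k].sum() - cm[k, k]
        spec[k] = tn / (tn + fp)
    return accuracy, spec
```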

Citations: 32
Automated classification of camouflaging cuttlefish
Pub Date : 2016-04-01 DOI: 10.1016/j.mio.2016.04.005
Eric C. Orenstein, Justin M. Haag, Yakir L. Gagnon, Jules S. Jaffe

The automated processing of images for scientific analysis has become an integral part of projects that collect large amounts of data. Our recent study of cuttlefish camouflaging behavior captured ∼12,000 images of the animals’ response to changing visual environments. This work presents an automated segmentation and classification workflow to alleviate the human cost of processing this complex data set. The specimens’ bodies are segmented from the background using a combination of intensity thresholding and Histogram of Oriented Gradients. Subregions are then used to train a texton-based classifier designed to codify traditional, manual methods of cuttlefish image analysis. The segmentation procedure properly selected the subregion from ∼95% of the images. The classifier achieved an accuracy of ∼94% as compared to manual annotation. Together, the process correctly processed ∼90% of the images. Additionally, we leverage the output of the classifier to propose a model of camouflage display that attributes a given display to a superposition of the user-defined classes.
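A minimal sketch of the intensity-thresholding half of this pipeline; the HOG refinement and texton classifier are omitted, and the global mean threshold is a simplification of whatever thresholding the authors actually used:

```python
import numpy as np
from scipy import ndimage

def segment_body(img, thresh=None):
    # Binarize by intensity (global mean threshold here, a simplification),
    # then keep the largest connected component as the candidate body region.
    if thresh is None:
        thresh = img.mean()
    mask = img > thresh
    labels, n = ndimage.label(mask)
    if n == 0:
        return None, None
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    body = labels == (int(np.argmax(sizes)) + 1)
    rows, cols = np.any(body, axis=1), np.any(body, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return (int(r0), int(r1), int(c0), int(c1)), body
```

The returned bounding box is the subregion a downstream texture classifier would then examine.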

Citations: 5
A computer vision approach for monitoring the spatial and temporal shrimp distribution at the LoVe observatory
Pub Date : 2016-04-01 DOI: 10.1016/j.mio.2016.03.002
Jonas Osterloff, Ingunn Nilssen, Tim W. Nattkemper

This paper demonstrates how computer vision can be applied for the automatic detection of shrimp in smaller areas of interest with a high temporal resolution for long time periods. A recorded sequence of digital HD camera images from fixed underwater observatories provides unique opportunities to study shrimp behavior in their natural environment, such as number of shrimp and their abundance at different locations (micro habitats) over time. Temporal color contrast features were applied to enable the detection of the semi-transparent shrimp. To study the spatial–temporal characteristics of the shrimp, pseudo-color visualizations referred to as shrimp abundance maps (SAM) are introduced. SAMs for different time periods are presented, to show the potential of the methodology.
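A temporal median background model gives one simple form of temporal color contrast (a generic sketch, not the authors' exact features): the per-pixel color distance of each frame from the stack median highlights transient, semi-transparent animals against the static scene of a fixed observatory camera.

```python
import numpy as np

def temporal_color_contrast(frames):
    # frames: (T, H, W, 3) float array from a fixed camera. The per-pixel
    # temporal median estimates the static background; the color distance
    # of each frame from it flags transient objects such as shrimp.
    background = np.median(frames, axis=0)
    diff = frames - background
    return np.linalg.norm(diff, axis=-1)  # (T, H, W) contrast magnitude
```

Thresholding and counting the resulting detections per region over time would yield abundance maps in the spirit of the paper's SAMs.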

{"title":"A computer vision approach for monitoring the spatial and temporal shrimp distribution at the LoVe observatory","authors":"Jonas Osterloff ,&nbsp;Ingunn Nilssen ,&nbsp;Tim W. Nattkemper","doi":"10.1016/j.mio.2016.03.002","DOIUrl":"10.1016/j.mio.2016.03.002","url":null,"abstract":"<div><p><span>This paper demonstrates how computer vision can be applied for the automatic detection of shrimp in smaller areas of interest with a high temporal resolution for long time periods. A recorded sequence of digital HD camera images from fixed underwater </span>observatories provides unique opportunities to study shrimp behavior in their natural environment, such as number of shrimp and their abundance at different locations (micro habitats) over time. Temporal color contrast features were applied to enable the detection of the semi-transparent shrimp. To study the spatial–temporal characteristics of the shrimp, pseudo-color visualizations referred to as shrimp abundance maps (SAM) are introduced. SAMs for different time periods are presented, to show the potential of the methodology.</p></div>","PeriodicalId":100922,"journal":{"name":"Methods in Oceanography","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.mio.2016.03.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90658204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Fully automated image segmentation for benthic resource assessment of poly-metallic nodules
Pub Date : 2016-04-01 DOI: 10.1016/j.mio.2016.04.002
Timm Schoening , Thomas Kuhn , Daniel O.B. Jones , Erik Simon-Lledo , Tim W. Nattkemper

Underwater image analysis is a new field for computational pattern recognition. In academia and industry alike, it is increasingly common to use camera-equipped stationary landers, autonomous underwater vehicles, ocean floor observatory systems or remotely operated vehicles for image-based monitoring and exploration. The resulting image collections create a bottleneck for manual data interpretation owing to their size.

In this paper, the problem of measuring size and abundance of poly-metallic nodules in benthic images is considered. A foreground/background separation (i.e. separating the nodules from the surrounding sediment) is required to determine the targeted quantities. Poly-metallic nodules are compact (convex), but vary in size and appear as composites with different visual features (color, texture, etc.).

Methods for automating nodule segmentation have so far relied on manual training data. However, a hand-drawn, ground-truthed segmentation of nodules and sediment is difficult (or even impossible) to achieve for a sufficient number of images. The new ES4C algorithm (Evolutionary tuned Segmentation using Cluster Co-occurrence and a Convexity Criterion) is presented that can be applied to a segmentation task without a reference ground truth. First, a learning vector quantization groups the visual features in the images into clusters. Secondly, a segmentation function is constructed by assigning the clusters to classes automatically according to defined heuristics. Using evolutionary algorithms, a quality criterion is maximized to assign cluster prototypes to classes. This criterion integrates the morphological compactness of the nodules as well as feature similarity in different parts of nodules. To assess its applicability, the ES4C algorithm is tested with two real-world data sets. For one of these data sets, a reference gold standard is available and we report a sensitivity of 0.88 and a specificity of 0.65.

Our results show that the applied heuristics, which combine patterns in the feature domain with patterns in the spatial domain, lead to good segmentation results and allow full automation of the resource-abundance assessment for benthic poly-metallic nodules.
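The two-stage structure of ES4C — cluster the visual features, then search for the cluster-to-class assignment that maximizes a quality criterion — can be sketched with toy stand-ins: 1-D k-means in place of learning vector quantization, and exhaustive search in place of the evolutionary optimizer. Both substitutions, plus the brightness-based quality function below, are ours; the real ES4C criterion combines nodule compactness with feature similarity across nodule parts:

```python
import itertools
import random

def vector_quantize(features, k, iters=20, seed=0):
    """Toy stand-in for LVQ: 1-D k-means returning k cluster prototypes."""
    rng = random.Random(seed)
    protos = rng.sample(features, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for f in features:
            nearest = min(range(k), key=lambda i: abs(f - protos[i]))
            groups[nearest].append(f)
        protos = [sum(g) / len(g) if g else protos[i]
                  for i, g in enumerate(groups)]
    return protos

def best_assignment(protos, quality):
    """Exhaustive stand-in for the evolutionary search: score every
    cluster -> {sediment: 0, nodule: 1} labelling, keep the best."""
    best_labels, best_q = None, float("-inf")
    for labels in itertools.product([0, 1], repeat=len(protos)):
        q = quality(protos, labels)
        if q > best_q:
            best_labels, best_q = labels, q
    return best_labels
```

With bimodal features such as `[0.10, 0.12, 0.11, 0.80, 0.82, 0.79]` and a quality function that rewards labelling dark prototypes as nodules, the search recovers the expected dark-nodule / bright-sediment assignment. Exhaustive search is feasible here only because the label space is 2^k; the paper's evolutionary algorithm is what makes the idea scale.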

{"title":"Fully automated image segmentation for benthic resource assessment of poly-metallic nodules","authors":"Timm Schoening ,&nbsp;Thomas Kuhn ,&nbsp;Daniel O.B. Jones ,&nbsp;Erik Simon-Lledo ,&nbsp;Tim W. Nattkemper","doi":"10.1016/j.mio.2016.04.002","DOIUrl":"10.1016/j.mio.2016.04.002","url":null,"abstract":"<div><p>Underwater image analysis is a new field for computational pattern recognition. In academia as well as in the industry, it is more and more common to use camera-equipped stationary landers, autonomous underwater vehicles, ocean floor observatory systems or remotely operated vehicles for image based monitoring and exploration. The resulting image collections create a bottleneck for manual data interpretation owing to their size.</p><p>In this paper, the problem of measuring size and abundance of poly-metallic nodules in benthic images is considered. A foreground/background separation (i.e. separating the nodules from the surrounding sediment) is required to determine the targeted quantities. Poly-metallic nodules are compact (convex), but vary in size and appear as composites with different visual features (color, texture, etc.).</p><p>Methods for automating nodule segmentation<span> have so far relied on manual training data. However, a hand-drawn, ground-truthed segmentation of nodules and sediment is difficult (or even impossible) to achieve for a sufficient number of images. The new ES4C algorithm (Evolutionary tuned Segmentation using Cluster Co-occurrence and a Convexity Criterion) is presented that can be applied to a segmentation task without a reference ground truth. First, a learning vector quantization groups the visual features in the images into clusters. Secondly, a segmentation function is constructed by assigning the clusters to classes automatically according to defined heuristics. Using evolutionary algorithms, a quality criterion is maximized to assign cluster prototypes to classes. 
This criterion integrates the morphological compactness of the nodules as well as feature similarity in different parts of nodules. To assess its applicability, the ES4C algorithm is tested with two real-world data sets. For one of these data sets, a reference gold standard is available and we report a sensitivity of 0.88 and a specificity of 0.65.</span></p><p>Our results show that the applied heuristics, which combine patterns in the feature domain with patterns in the spatial domain, lead to good segmentation results and allow full automation of the resource-abundance assessment for benthic poly-metallic nodules.</p></div>","PeriodicalId":100922,"journal":{"name":"Methods in Oceanography","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.mio.2016.04.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81197023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26