Chentao Wang, Ming Deng, Nini Duan, Xiaoxi Ma, M. Wang
Abstract. This paper proposes a method for acquiring complete status information and data from the marine controlled-source electromagnetic (MCSEM) transmitter during offshore experiments. The subordinate machine system is built on the STM32 platform and incorporates a real-time operating system. It applies the internet of things (IoT) concept to interconnect the various modules within the transmitter, enabling intelligent control and management. At the same time, data are uploaded to the control room on the deck through photoelectric composite cables, and the host computer's software, written in Python, processes and stores all the data. This allows workers on the deck to control the subordinate computer and obtain high-precision, complete data in real time. Joint tests of the subordinate and host computers have demonstrated the stability and reliability of the online transmitter system, which provides significant convenience for offshore exploration.
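As an illustration of the deck-side role described above, the minimal Python sketch below shows how a host program might parse fixed-size telemetry frames arriving over a TCP link and store them for later analysis. The transport, frame layout, sync word, host address, and database schema are all assumptions made for the example; they are not taken from the paper.

```python
import socket
import sqlite3
import struct

# Hypothetical frame: sync word (uint32), UNIX timestamp (double),
# output voltage, output current, cabin temperature (floats).
FRAME_FMT = ">Idfff"
FRAME_SIZE = struct.calcsize(FRAME_FMT)
SYNC_WORD = 0xA55A5AA5  # assumed marker, not from the paper

def store_frame(db, ts, voltage, current, temperature):
    db.execute(
        "INSERT INTO telemetry (ts, voltage, current, temperature) VALUES (?, ?, ?, ?)",
        (ts, voltage, current, temperature),
    )
    db.commit()

def run_receiver(host="192.168.1.50", port=5000, db_path="transmitter.db"):
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS telemetry "
               "(ts REAL, voltage REAL, current REAL, temperature REAL)")
    with socket.create_connection((host, port)) as sock:
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            while len(buf) >= FRAME_SIZE:
                sync, ts, v, i, t = struct.unpack(FRAME_FMT, buf[:FRAME_SIZE])
                if sync != SYNC_WORD:
                    buf = buf[1:]            # slide one byte and search for the next sync word
                    continue
                buf = buf[FRAME_SIZE:]       # consume the frame and store it
                store_frame(db, ts, v, i, t)
```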
{"title":"Research on online data transmission technology in a marine controlled-source electromagnetic transmitter","authors":"Chentao Wang, Ming Deng, Nini Duan, Xiaoxi Ma, M. Wang","doi":"10.5194/gi-12-187-2023","DOIUrl":"https://doi.org/10.5194/gi-12-187-2023","url":null,"abstract":"Abstract. This paper proposes a method for acquiring complete status information and data from the marine controlled-source electromagnetic (MCSEM) transmitter during offshore experiments. The subordinate machine system is constructed on the STM32 platform and incorporates a real-time operating system. It utilizes the internet of things (IoT) concept to interconnect various modules within the transmitter, enabling intelligent control and management. At the same time, data are uploaded to the control room on the deck through photoelectric composite cables, and the host computer's software, designed with Python language, will process and store all the data. This allows workers on the deck to control the subordinate computer and obtain high-precision, complete data in real time. The joint tests between the subordinate and host computers have demonstrated the stability and reliability of the online transmitter system, which provides significant convenience for offshore exploration.\u0000","PeriodicalId":48742,"journal":{"name":"Geoscientific Instrumentation Methods and Data Systems","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43362172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Pallotta, S. A. de Carvalho, F. Lopes, A. Cacheffo, E. Landulfo, H. Barbosa
Abstract. Atmospheric lidars can simultaneously measure clouds and aerosols with high temporal and spatial resolution and hence help understand cloud–aerosol interactions, which are the source of major uncertainties in future climate projections. However, atmospheric lidars are typically custom-built, with significant differences between them. In this sense, lidar networks play a crucial role as they coordinate the efforts of different groups, provide guidelines for quality-assured routine measurements and opportunities for side-by-side instrument comparisons, and enforce algorithm validation, all aiming to homogenize the physical retrievals from heterogeneous instruments in a network. Here we provide a high-level overview of the Lidar Processing Pipeline (LPP), an ongoing, collaborative, and open-source coordinated effort in Latin America. The LPP is a collection of tools with the ultimate goal of handling all the steps of a typical analysis of lidar measurements. The modular and configurable framework is generic enough to be applicable to any lidar instrument. The first publicly released version of the LPP produces data files at levels 0 (raw and metadata), 1 (averaging and layer mask), and 2 (aerosol optical properties). We assess the performance of the LPP through quantitative and qualitative analyses of simulated and measured elastic lidar signals. For noiseless synthetic 532 nm elastic signals with a constant lidar ratio (LR), the root mean square error (RMSE) in aerosol extinction within the boundary layer is about 0.1 %. In contrast, retrievals of aerosol backscatter from noisy elastic signals with a variable LR have an RMSE of 11 %, mostly due to assuming a constant LR in the inversion. The application of the LPP for measurements in São Paulo, further constrained by co-located AERONET data, retrieved a lidar ratio of 69.9 ± 5.2 sr at 532 nm, in agreement with reported values for urban aerosols. Over the Amazon, analysis of a 6 km thick multi-layer cirrus found a cloud optical depth of about 0.46, also in agreement with previous studies. From this exercise, we identify the need for new features and discuss a roadmap to guide future development, accommodating the needs of our community.
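To make the level-1 processing step more concrete, the following minimal sketch shows a generic background subtraction, range correction, and temporal averaging of raw elastic profiles, together with one possible definition of the relative RMSE used to compare a retrieved profile against a reference. This illustrates the generic operations only, not the LPP code; the bin counts, background window, and demo numbers are arbitrary.

```python
import numpy as np

def level1_profile(raw_profiles, range_m, background_bins=500):
    """Toy level-1 step: average raw shots, subtract background, range-correct.

    raw_profiles : (n_shots, n_bins) array of raw lidar signal
    range_m      : (n_bins,) array of range gates in metres
    The background is estimated from the last `background_bins` far-range gates.
    """
    mean_profile = np.asarray(raw_profiles).mean(axis=0)      # temporal averaging
    background = mean_profile[-background_bins:].mean()       # far-range background estimate
    corrected = mean_profile - background
    return corrected * np.asarray(range_m) ** 2               # range-corrected signal

def rmse_percent(retrieved, reference):
    """RMSE of a retrieved profile, in percent of the mean absolute reference value
    (one possible normalisation, chosen for this example)."""
    retrieved, reference = np.asarray(retrieved), np.asarray(reference)
    return 100.0 * np.sqrt(np.mean((retrieved - reference) ** 2)) / np.mean(np.abs(reference))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ranges = np.linspace(7.5, 15000.0, 2000)
    shots = rng.poisson(50.0, size=(600, 2000)).astype(float)  # synthetic raw shots
    print(level1_profile(shots, ranges)[:5])
```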
{"title":"Collaborative development of the Lidar Processing Pipeline (LPP) for retrievals of atmospheric aerosols and clouds","authors":"J. Pallotta, S. A. de Carvalho, F. Lopes, A. Cacheffo, E. Landulfo, H. Barbosa","doi":"10.5194/gi-12-171-2023","DOIUrl":"https://doi.org/10.5194/gi-12-171-2023","url":null,"abstract":"Abstract. Atmospheric lidars can simultaneously measure clouds and aerosols with high temporal and spatial resolution and hence help understand cloud–aerosol interactions, which are the source of major uncertainties in future climate projections. However, atmospheric lidars are typically custom-built, with significant differences between them. In this sense, lidar networks play a crucial role as they coordinate the efforts of different groups, provide guidelines for quality-assured routine measurements and opportunities for side-by-side instrument comparisons, and enforce algorithm validation, all aiming to homogenize the physical retrievals from heterogeneous instruments in a network. Here we provide a high-level overview of the Lidar Processing Pipeline (LPP), an ongoing, collaborative, and open-source coordinated effort in Latin America. The LPP is a collection of tools with the ultimate goal of handling all the steps of a typical analysis of lidar measurements. The modular and configurable framework is generic enough to be applicable to any lidar instrument. The first publicly released version of the LPP produces data files at levels 0 (raw and metadata), 1 (averaging and layer mask), and 2 (aerosol optical properties). We assess the performance of the LPP through quantitative and qualitative analyses of simulated and measured elastic lidar signals.\u0000For noiseless synthetic 532 nm elastic signals with a constant lidar ratio (LR), the root mean square error (RMSE) in aerosol extinction within the boundary layer is about 0.1 %. In contrast, retrievals of aerosol backscatter from noisy elastic signals with a variable LR have an RMSE of 11 %, mostly due to assuming a constant LR in the inversion.\u0000The application of the LPP for measurements in São Paulo, further constrained by co-located AERONET data, retrieved a lidar ratio of 69.9 ± 5.2 sr at 532 nm, in agreement with reported values for urban aerosols. Over the Amazon, analysis of a 6 km thick multi-layer cirrus found a cloud optical depth of about 0.46, also in agreement with previous studies. From this exercise, we identify the need for new features and discuss a roadmap to guide future development, accommodating the needs of our community.\u0000","PeriodicalId":48742,"journal":{"name":"Geoscientific Instrumentation Methods and Data Systems","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47782001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hubert T. Samboko, S. Schurer, H. Savenije, H. Makurira, K. Banda, H. Winsemius
Abstract. Uncrewed aerial vehicles (UAVs), affordable precise global navigation satellite system hardware, multi-beam echo sounders, open-source 3D hydrodynamic modelling software, and freely available satellite data have opened up opportunities for a robust, affordable, physics-based approach to monitoring river flows. Traditional methods of river discharge estimation are based on point measurements and do not account for the heterogeneity of the river geometry. In contrast, a UAV-based system that merges geotagged images through photogrammetry to generate a high-resolution digital elevation model (DEM) provides an alternative. Such a system can capture the spatial variability in the channel shape as input to a hydraulic model and hence potentially yield a more accurate flow discharge. In short, the system can be used to produce the river geometry at greater resolution so as to improve the accuracy of discharge estimates. Three-dimensional hydrodynamic modelling offers a framework to establish relationships between river flow and state variables such as width and depth, while satellite images with surface water detection methods or altimetry records can be used to operationally monitor flows through the established rating curve. Uncertainties in the data acquisition may propagate into uncertainties in the relationships found between discharge and state variables. Variations in acquired geometry emanate from the different ground control point (GCP) densities and distributions used during photogrammetry-based terrain reconstruction. In this study, we develop a rating curve using affordable data collection methods and basic principles of physics. The basic principle involves merging photogrammetry-based dry bathymetry with wet bathymetry measured using an acoustic Doppler current profiler (ADCP). The output is a seamless bathymetry which is fed into the hydraulic model so as to estimate discharge. The impact of uncertainties in the geometry on discharge estimation is investigated, as is the impact of uncertainties in satellite observations of depth and width. The study shows comparable results between the 3D and traditional river rating discharge estimations. The rating curve derived on the basis of 3D hydraulic modelling was within the 95 % confidence interval of the traditional gauging-based rating curve. The 3D-hydraulic-model-based estimation requires determination of the roughness coefficient within the stable bed and the floodplain using field observations at the end of both the dry and the wet season. Furthermore, the study demonstrates that variations in the density of GCPs beyond an optimal number have no significant influence on the resultant rating relationships. Finally, the study observes that which state variable approximation (water level or river width) is more accurate depends on the magnitude of the flow. Combining stage-appropriate proxies (
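The rating relationship itself is commonly expressed as a power law, Q = a (h − h0)^b. The short sketch below fits such a curve to hypothetical stage–discharge pairs with SciPy; the numbers are invented for illustration, and the fit stands in for, rather than reproduces, the 3D-hydraulic-model-based rating developed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    """Power-law rating curve Q = a * (h - h0)**b, a common functional form."""
    return a * np.clip(h - h0, 1e-6, None) ** b

# Hypothetical stage (m) and discharge (m^3 s^-1) pairs, e.g. from hydraulic
# model runs; the numbers below are made up purely for illustration.
stage = np.array([1.2, 1.8, 2.5, 3.1, 4.0, 5.2])
discharge = np.array([40.0, 120.0, 310.0, 520.0, 900.0, 1500.0])

params, cov = curve_fit(rating_curve, stage, discharge, p0=(100.0, 0.5, 1.8))
a, h0, b = params
print(f"Q = {a:.1f} * (h - {h0:.2f})^{b:.2f}")
```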
{"title":"Towards affordable 3D physics-based river flow rating: application over the Luangwa River basin","authors":"Hubert T. Samboko, S. Schurer, H. Savenije, H. Makurira, K. Banda, H. Winsemius","doi":"10.5194/gi-12-155-2023","DOIUrl":"https://doi.org/10.5194/gi-12-155-2023","url":null,"abstract":"Abstract. Uncrewed aerial vehicles (UAVs), affordable precise global navigation satellite system hardware, multi-beam echo sounders, open-source 3D hydrodynamic modelling software, and freely available satellite data have opened up opportunities for a robust, affordable, physics-based approach to monitoring river flows. Traditional methods of river discharge estimation are based on point measurements, and heterogeneity of the river geometry is not contemplated. In contrast, a UAV-based system which makes use of geotagged images captured and merged through photogrammetry in order to generate a high-resolution digital elevation model (DEM) provides an alternative. This UAV system can capture the spatial variability in the channel shape for the purposes of input to a hydraulic model and hence probably a more accurate flow discharge. In short, the system can be used to produce the river geometry at greater resolution so as to improve the accuracy in discharge estimations. Three-dimensional hydrodynamic modelling offers a framework to establish relationships between river flow and state variables such as width and depth, while satellite images with surface water detection methods or altimetry records can be used to operationally monitor flows through the established rating curve. Uncertainties in the data acquisition may propagate into uncertainties in the relationships found between discharge and state variables. Variations in acquired geometry emanate from the different ground control point (GCP) densities and distributions used during photogrammetry-based terrain reconstruction. In this study, we develop a rating curve using affordable data collection methods and basic principles of physics. The basic principal involves merging a photogrammetry-based dry bathymetry and wet bathymetry measured using an acoustic Doppler current profiler (ADCP). The output is a seamless bathymetry which is fed into the hydraulic model so as to estimate discharge. The impact of uncertainties in the geometry on discharge estimation is investigated. The impact of uncertainties in satellite observation of depth and width is also analysed. The study shows comparable results between the 3D and traditional river rating discharge estimations. The rating curve derived on the basis of 3D hydraulic modelling was within a 95 % confidence interval of the traditional gauging-based rating curve. The 3D-hydraulic-model-based estimation requires determination of the roughness coefficient within the stable bed and the floodplain using field observation at the end of both the dry and wet season. Furthermore, the study demonstrates that variations in the density of GCPs beyond an optimal number have no significant influence on the resultant rating relationships. Finally, the study observes that which state variable approximation (water level and river width) is more accurate depends on the magnitude of the flow. 
Combining stage-appropriate proxies (","PeriodicalId":48742,"journal":{"name":"Geoscientific Instrumentation Methods and Data Systems","volume":"146 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71234769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract. Multiphase deformation, where a solid and a fluid phase deform simultaneously, plays a crucial role in a variety of geological hazards, such as landslides, glacial slip, and the transition from earthquakes to slow slip. In all these examples, a continuous, viscous, or fluid-like phase is mixed with a granular or brittle phase, and both phases deform simultaneously when stressed. Understanding the interaction between the phases and how they impact deformation dynamics is crucial to improving hazard assessments for a wide variety of geohazards. Here, we present the design and first experimental results from a ring shear deformation apparatus capable of deforming multiple phases simultaneously. The experimental design allows for 3D observations during deformation in addition to unlimited shear strain, controllable normal force, and a variety of boundary conditions. To impose shear deformation, either the experimental chamber or the lid rotates around its central axis while the other remains stationary. Normal and pulling force data are collected with force gauges located on the lid of the apparatus and between the pulling motor and the experimental chamber. Experimental materials are chosen to match the refractive index of the experimental chamber, such that 3D observations can be made throughout the experiment with the help of a laser light sheet. We present experimental results where we deform hydropolymer orbs (brittle phase) and Carbopol® hydropolymer gel (fluid phase). Preliminary results show variability in force measurements and deformation styles between solid and fluid end-member experiments. The ratio of solids to fluids and their relative competencies in multiphase experiments control deformation dynamics, which range from stick–slip to creep. The presented experimental strategy has the potential to shed light on multiphase processes associated with multiple geohazards.
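For orientation, the unlimited-shear-strain property follows directly from the rotary geometry: the engineering shear strain at radius r for a relative rotation θ across a gap of height h is γ = rθ/h, and θ can grow without bound. The sketch below evaluates this for assumed dimensions; the numbers are hypothetical and are not the dimensions of the apparatus.

```python
import numpy as np

# Back-of-the-envelope kinematics for a ring shear cell: gamma = r * theta / h.
# All dimensions below are assumed values, chosen only to illustrate the idea.
r_mid = 0.10          # m, mid-channel radius (assumed)
gap_height = 0.05     # m, height of the sheared layer (assumed)
rotation_deg = 720.0  # two full revolutions of the lid

theta = np.deg2rad(rotation_deg)
gamma = r_mid * theta / gap_height
print(f"engineering shear strain ≈ {gamma:.1f}")   # ~25 for these numbers
```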
{"title":"New ring shear deformation apparatus for three-dimensional multiphase experiments: first results","authors":"S. McLafferty, Haley Bix, K. Bogatz, J. Reber","doi":"10.5194/gi-12-141-2023","DOIUrl":"https://doi.org/10.5194/gi-12-141-2023","url":null,"abstract":"Abstract. Multiphase deformation, where a solid and fluid phase deform simultaneously, plays a crucial role in a variety of geological hazards, such as landslides, glacial slip, and the transition from earthquakes to slow slip. In all these examples, a continuous, viscous, or fluid-like phase is mixed with a granular or brittle phase, where both phases deform simultaneously when stressed. Understanding the interaction between the phases and how they will impact deformation dynamics is crucial to improve the hazard assessments for a wide variety of geohazards. Here, we present the design and first experimental results from a ring shear deformation apparatus capable of deforming multiple phases simultaneously. The experimental design allows for 3D observations during deformation in addition to unlimited shear strain, controllable normal force, and a variety of boundary conditions. To impose shear deformation, either the experimental chamber or lid rotate around its central axis while the other remains stationary. Normal and pulling force data are collected with force gauges located on the lid of the apparatus and between the pulling motor and the experimental chamber. Experimental materials are chosen to match the light refraction index of the experimental chamber, such that 3D observations can be made throughout the experiment with the help of a laser light sheet. We present experimental results where we deform hydropolymer orbs (brittle phase) and Carbopol® hydropolymer gel (fluid phase). Preliminary results show variability in force measurements and deformation styles between solid and fluid end-member experiments. The ratio of solids to fluids and their relative competencies in multiphase experiments control deformation dynamics, which range from stick–slip to creep. The presented experimental strategy has the potential to shed light on multiphase processes associated with multiple geohazards.\u0000","PeriodicalId":48742,"journal":{"name":"Geoscientific Instrumentation Methods and Data Systems","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49535183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
W. Colgan, Christopher L. Shields, P. Talalay, Xiaopeng Fan, Austin P. Lines, Joshua Elliott, H. Rajaram, K. Mankoff, M. Jensen, Mira Backes, Yue Liu, Xianzhe Wei, N. Karlsson, Henrik Spanggård, Allan Ø. Pedersen
Abstract. We introduce the design and performance of an electrothermal ice-drilling system designed to insert a temperature sensor cable into ice. The melt tip is relatively simple and low-cost, designed for a one-way trip to the ice–bed interface. The drilling system consists of a melt tip, umbilical cable, winch, interface, power supply, and support items. The melt tip and the winch are the most novel elements of the drilling system, and we make the hardware and electrical designs of these components available open-access. Tests conducted in a laboratory indicate that the melt tip has an electrical energy to forward melting heat transfer efficiency of ∼35 % with a theoretical maximum penetration rate of ∼12 m h−1 at the maximum power of 6.0 kW. In contrast, ice-sheet testing suggests the melt tip has an analogous heat transfer efficiency of ∼15 % with a theoretical maximum penetration rate of ∼6 m h−1. We expect the efficiency gap between laboratory and field performance to decrease with increasing operator experience. Umbilical freeze-in due to borehole refreezing is the primary depth-limiting factor of the drilling system. Enthalpy-based borehole refreezing assessments predict refreezing below the critical umbilical diameter in ∼4 h at −20 ∘C ice temperatures and ∼20 h at −2 ∘C. This corresponds to a theoretical depth limit of up to ∼200 m, depending on firn thickness, ice temperature, and operator experience.
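The quoted efficiencies and penetration rates are linked by a simple energy balance: the useful heating power must melt a cylinder of ice the diameter of the tip. The sketch below evaluates this balance; the ∼5 cm tip diameter and ice temperature are assumptions for illustration, with only the 6.0 kW power and ∼35 % efficiency taken from the abstract.

```python
import math

# Rough energy balance for a melt tip: penetration rate follows from the
# power available for melting and the enthalpy needed per metre of borehole.
RHO_ICE = 917.0        # kg m-3
LATENT_HEAT = 334e3    # J kg-1
CP_ICE = 2100.0        # J kg-1 K-1

def max_penetration_rate(power_w, efficiency, tip_diameter_m, ice_temp_c=-2.0):
    """Theoretical penetration rate (m/h) when all useful heat melts a
    cylinder of ice the diameter of the tip."""
    area = math.pi * (tip_diameter_m / 2.0) ** 2
    energy_per_m = RHO_ICE * area * (LATENT_HEAT + CP_ICE * abs(ice_temp_c))
    return 3600.0 * efficiency * power_w / energy_per_m

# With an assumed ~5 cm tip, 6 kW at 35 % efficiency gives roughly 12 m/h,
# the same order as the laboratory figure quoted above.
print(f"{max_penetration_rate(6000.0, 0.35, 0.05):.1f} m/h")
```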
{"title":"Design and performance of the Hotrod melt-tip ice-drilling system","authors":"W. Colgan, Christopher L. Shields, P. Talalay, Xiaopeng Fan, Austin P. Lines, Joshua Elliott, H. Rajaram, K. Mankoff, M. Jensen, Mira Backes, Yue Liu, Xianzhe Wei, N. Karlsson, Henrik Spanggård, Allan Ø. Pedersen","doi":"10.5194/gi-12-121-2023","DOIUrl":"https://doi.org/10.5194/gi-12-121-2023","url":null,"abstract":"Abstract. We introduce the design and performance of an electrothermal ice-drilling system designed to insert a temperature sensor cable into ice. The melt tip is relatively simple and low-cost, designed for a one-way trip to the ice–bed interface. The drilling system consists of a melt tip, umbilical cable, winch, interface, power supply, and support items. The melt tip and the winch are the most novel elements of the drilling system, and we make the hardware and electrical designs of these components available open-access. Tests conducted in a laboratory\u0000indicate that the melt tip has an electrical energy to forward melting heat transfer efficiency of ∼35 % with a theoretical maximum penetration rate of ∼12 m h−1at maximum 6.0 kW power. In contrast, ice-sheet testing suggests the melt tip has an analogous heat transfer efficiency of ∼15 % with a theoretical maximum penetration rate of ∼6 m h−1. We expect the efficiency gap between laboratory and field performance to decrease with increasing operator experience. Umbilical freeze-in due to borehole refreezing is the primary depth-limiting factor of the drilling system. Enthalpy-based borehole refreezing assessments predict refreezing below critical umbilical diameter in ∼4 h at −20 ∘C ice temperatures and ∼20 h at −2 ∘C. This corresponds to a theoretical depth limit of up to ∼200 m, depending on firn thickness, ice temperature, and operator experience.\u0000","PeriodicalId":48742,"journal":{"name":"Geoscientific Instrumentation Methods and Data Systems","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45670401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract. The term geoscientific laboratory measurements covers a variety of methods in the geosciences. Accordingly, the resulting data comprise many different data types, formats, and sizes. Handling such a diversity of data, e.g., by storing the data in a generally applicable database, is difficult. Some discipline-specific approaches exist, but a geoscientific laboratory database that is generally applicable to different geoscientific disciplines has been missing until now. However, making research data available to scientists beyond a particular community has become increasingly important. Within a pilot project of the NFDI4Earth initiative, we developed a conceptual model for a geoscientific laboratory database. To be able to handle the complex settings of geoscientific laboratory studies, flexibility and extensibility are key attributes of the presented approach. The model is intended to follow the FAIR data principles to facilitate interdisciplinary applicability. In this study, we consider different procedures from existing database models and include these methods in the conceptual model.
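To illustrate the flexibility and extensibility the conceptual model calls for, the sketch below pairs fixed core entities (sample, measurement) with open key-value metadata so that discipline-specific parameters can be attached without schema changes. It is a hypothetical illustration of the idea, not the NFDI4Earth model itself.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Sample:
    sample_id: str
    material: str
    location: str
    metadata: Dict[str, Any] = field(default_factory=dict)    # e.g. porosity, lithology

@dataclass
class Measurement:
    measurement_id: str
    sample_id: str
    method: str                                                # e.g. "spectral induced polarization"
    parameters: Dict[str, Any] = field(default_factory=dict)   # instrument settings
    results: Dict[str, Any] = field(default_factory=dict)      # derived quantities
    data_files: List[str] = field(default_factory=list)        # references to raw data

# Example usage with made-up values: a permeability test on a sandstone sample.
sample = Sample("S-001", "sandstone", "borehole A", {"porosity": 0.21})
meas = Measurement("M-042", "S-001", "permeability test",
                   parameters={"confining_pressure_MPa": 5.0},
                   results={"permeability_mD": 120.0},
                   data_files=["raw/M-042.csv"])
print(sample, meas, sep="\n")
```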
{"title":"Making geoscientific lab data FAIR: A conceptual model for a geophysical laboratory database","authors":"Sven Nordsiek, Matthias Halisch","doi":"10.5194/gi-2023-9","DOIUrl":"https://doi.org/10.5194/gi-2023-9","url":null,"abstract":"<strong>Abstract.</strong> The term of geoscientific laboratory measurements involves a variety of methods in geosciences. Accordingly, the resulting data comprise many different data types, formats, and sizes, respectively. Handling such a diversity of data, e.g., by storing the data in a generally applicable database, is difficult. Some discipline-specific approaches exist, but a geoscientific laboratory database that is generally applicable to different geoscientific disciplines is missing up to now. However, making research data available to scientists beyond a particular community has become increasingly important. Within a pilot project of the NFDI4Earth initiative, we developed a conceptual model for a geoscientific laboratory database. For being able to handle complex settings of geoscientific laboratory studies, flexibility and extensibility are key attributes of the presented approach. The model is intended to follow the FAIR data principles to facilitate interdisciplinary applicability. In this study, we consider different procedures from existing database models and include these methods in the conceptual model.","PeriodicalId":48742,"journal":{"name":"Geoscientific Instrumentation Methods and Data Systems","volume":"10 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138538554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract. In this paper, we propose a new type of power station unit with wireless data transmission capability. This work overcomes the limitation that conventional equipment cannot upload data directly to a central unit. Based on this, a novel distributed geophysical data acquisition architecture is also proposed, which enhances work efficiency by simplifying the system structure while maintaining core features. Designs that realise key functions, including isolated high-power output, power management, wireless data transmission, and high-precision clock synchronisation, are introduced in this article. The prototype was then packaged, and a series of evaluation experiments was carried out to verify the key parameters of the instrument. The experimental results show that the overall design of the instrument is feasible and that its key parameters outperform those of the industry-leading instrument LAUL-428. Owing to the wireless networking strategy, the proposed instrument further realises remote control and real-time data playback through the host computer software, making it suitable for joint geophysical exploration as well as microseismic monitoring. At the system level, it can be customised by connecting different kinds of conventional acquisition stations for a variety of prospecting targets.
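As an illustration of the kind of framing a wireless acquisition unit needs, the sketch below packs a station identifier, timestamp, and sample block into a CRC-protected frame and parses it back. The frame layout is hypothetical and does not represent the authors' NB-IoT protocol.

```python
import struct
import time
import zlib

# Hypothetical uplink frame: station id (uint16), timestamp (double),
# sample count (uint32), float samples, then a CRC32 trailer.
HEADER_FMT = ">HdI"

def build_frame(station_id: int, samples: list) -> bytes:
    payload = struct.pack(HEADER_FMT, station_id, time.time(), len(samples))
    payload += struct.pack(f">{len(samples)}f", *samples)
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return payload + struct.pack(">I", crc)

def parse_frame(frame: bytes):
    payload, (crc,) = frame[:-4], struct.unpack(">I", frame[-4:])
    if zlib.crc32(payload) & 0xFFFFFFFF != crc:
        raise ValueError("CRC mismatch: frame corrupted in transit")
    station_id, ts, n = struct.unpack_from(HEADER_FMT, payload)
    samples = struct.unpack_from(f">{n}f", payload, struct.calcsize(HEADER_FMT))
    return station_id, ts, list(samples)

frame = build_frame(7, [0.12, -0.03, 0.45])
print(parse_frame(frame))
```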
{"title":"Development of a power station unit in a distributed hybrid acquisition system of seismic and electrical methods based on the narrowband Internet of Things (NB-IoT)","authors":"Fengzuo Guo, Qisheng Zhang, Shenghui Liu","doi":"10.5194/gi-12-111-2023","DOIUrl":"https://doi.org/10.5194/gi-12-111-2023","url":null,"abstract":"Abstract. In this paper, we propose a new type of power station unit with\u0000wireless data transmission capability. This work breaks the limitation that\u0000conventional equipment is unable to upload data directly to a central unit.\u0000Based on that, a novel distributed geophysical data acquisition architecture\u0000is also proposed, enhancing the work efficiency by simplifying the system\u0000structure while maintaining core features. Designs that realise key\u0000functions including isolated high-power output, power management, wireless\u0000data transmission and high-precision clock synchronisation are\u0000introduced in this article. The prototype was packaged then, and a series of\u0000evaluation experiments were implemented to verify the key parameters of the\u0000instrument. Experiment results proved that the overall design of the\u0000instrument is feasible, and the key parameters outperform the industry\u0000leading instrument LAUL-428. Due to the wireless networking strategy, the\u0000proposed instrument further realises remote control and real-time data\u0000playback through the host computer software, making it suitable for joint\u0000geophysical exploration as well as microseismic monitoring. As for the system level, it could be customised by connecting different kinds of conventional\u0000acquisition stations for many kinds of prospecting targets.\u0000","PeriodicalId":48742,"journal":{"name":"Geoscientific Instrumentation Methods and Data Systems","volume":"1 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42975788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Zender, D. Koschny, R. Rudawska, Salvatore Vicinanza, S. Loehle, Martin F. Eberhart, A. Meindl, H. Smit, L. Marraffa, Rico Landman, D. Stam
Abstract. The Canary Island Long-Baseline Observatory (CILBO) is a double-station meteor camera setup located on the Canary Islands and operated by ESA's Meteor Research Group since 2010. Observations of meteors are obtained in the visual wavelength band by intensified video cameras at both stations, supplemented by an intensified video camera mounted with a spectral grating at one of the locations. The cameras observe during cloudless and precipitation-free nights, and data are transferred to a main computer located at ESA/ESTEC once a day. The image frames that contain spectral information are calibrated, corrected, and finally processed into line intensity profiles. An ablation simulation, based on Bayesian statistics using a Markov chain Monte Carlo method, allows a parameter space (including the ablation temperatures, chemical elements, and their corresponding line intensities) to be determined and fitted against the line intensity profiles of the observed meteor spectra. The algorithm is presented in this paper and one example is discussed. Several hundred spectra have been processed and made available through the Guest Archive Facility of the Planetary Science Archive of ESA. The data format and metadata are explained.
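The fitting idea can be illustrated with a toy Metropolis–Hastings sampler that estimates the amplitudes of a few fixed Gaussian emission lines from a noisy synthetic spectrum. The line centres, width, prior, and step size below are invented for the example; the actual ablation simulation fits a much richer parameter space (temperatures, elemental abundances, line intensities) as described above.

```python
import numpy as np

# Hypothetical line centres (nm) standing in for the emission lines in a meteor spectrum.
LINE_CENTRES = np.array([518.4, 557.7, 589.0])
LINE_WIDTH = 2.0  # nm, assumed instrumental broadening

def model_spectrum(wavelengths, amplitudes):
    """Sum of Gaussian lines with fixed centres and width."""
    out = np.zeros_like(wavelengths)
    for c, a in zip(LINE_CENTRES, amplitudes):
        out += a * np.exp(-0.5 * ((wavelengths - c) / LINE_WIDTH) ** 2)
    return out

def log_posterior(amplitudes, wavelengths, observed, noise_sigma):
    if np.any(amplitudes < 0):
        return -np.inf                        # flat prior with non-negative amplitudes
    resid = observed - model_spectrum(wavelengths, amplitudes)
    return -0.5 * np.sum((resid / noise_sigma) ** 2)

def metropolis(wavelengths, observed, noise_sigma, n_steps=20000, step=0.05):
    current = np.ones(len(LINE_CENTRES))
    current_lp = log_posterior(current, wavelengths, observed, noise_sigma)
    samples = []
    for _ in range(n_steps):
        proposal = current + np.random.normal(scale=step, size=current.size)
        lp = log_posterior(proposal, wavelengths, observed, noise_sigma)
        if np.log(np.random.rand()) < lp - current_lp:   # accept/reject step
            current, current_lp = proposal, lp
        samples.append(current.copy())
    return np.array(samples)

if __name__ == "__main__":
    wl = np.linspace(500, 600, 500)
    true_amp = np.array([0.8, 1.5, 0.6])
    obs = model_spectrum(wl, true_amp) + np.random.normal(scale=0.05, size=wl.size)
    chain = metropolis(wl, obs, noise_sigma=0.05)
    print("posterior mean amplitudes:", chain[5000:].mean(axis=0))
```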
{"title":"Spectral observations at the Canary Island Long-Baseline Observatory (CILBO): calibration and datasets","authors":"J. Zender, D. Koschny, R. Rudawska, Salvatore Vicinanza, S. Loehle, Martin F. Eberhart, A. Meindl, H. Smit, L. Marraffa, Rico Landman, D. Stam","doi":"10.5194/gi-12-91-2023","DOIUrl":"https://doi.org/10.5194/gi-12-91-2023","url":null,"abstract":"Abstract. The Canary Island Long-Baseline Observatory (CILBO) is a double-station meteor camera setup located on the Canary Islands operated by ESA's Meteor Research Group since 2010. Observations of meteors are obtained in the visual wavelength band by intensified video cameras from both stations, supplemented by an intensified video camera mounted with a spectral grating at one of the locations.\u0000The cameras observe during cloudless and precipitation-free nights, and data are transferred to a main computer located at ESA/ESTEC once a day. The image frames that contain spectral information are calibrated, corrected, and finally processed into line intensity profiles. An ablation simulation, based on Bayesian statistics using a Markov chain Monte Carlo method, allows determining a parameter space, including the ablation temperatures, chemical elements, and their corresponding line intensities, to fit against the line intensity profiles of the observed meteor spectra.\u0000The algorithm is presented in this paper and one example is discussed. Several hundred spectra have been processed and made available through the Guest Archive Facility of the Planetary Science Archive of ESA. The data format and metadata are explained.\u0000","PeriodicalId":48742,"journal":{"name":"Geoscientific Instrumentation Methods and Data Systems","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49203446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract. A sudden and significant intensification of the auroral arc with expanding motion (we call it “local-arc breaking” hereafter) is an important event in many respects but is easy to miss in real-time watching due to its short rise time. To ease this problem, a real-time alert system for local-arc breaking was developed for the Kiruna all-sky camera (ASC) using ASC images in the JPEG format. The identification of local-arc breaking is made in two steps, using the “expert system” in both: (1) explicit criteria for the classification of each pixel, followed by simple calculations, are applied to each ASC image to obtain a simple set of numbers, the “ASC auroral index”, representing the occupancy of aurora pixels and the characteristic intensity of the brightest aurora in the image; (2) using this ASC auroral index, the level of auroral activity is estimated, with Level 6 denoting clear local-arc breaking and Level 4 a precursor to it (Levels 1–3 are reserved for less active aurora and Level 5 for less intense sudden intensification). The first step is further divided into two stages. In stage (1a), using simple criteria for the R (red), G (green), and B (blue) values in the RGB color code and the H (hue) value calculated from these RGB values, each pixel of a JPEG image is classified into three aurora categories (from brightest to faintest, “strong aurora”, “green arc”, and “visible diffuse (aurora)”) and three non-aurora light source categories (“cloud”, “artificial light”, and “Moon”). Here, strong aurora means that the ordinary green color from atomic oxygen's 558 nm emission is either nearly saturated or mixed with red color at around 670 nm emitted by molecular nitrogen. In stage (1b), the percentage of the occupied area (pixel coverage) for each category and the characteristic intensity of the strong aurora pixels are calculated. The obtained ASC auroral index is posted both in ASCII format and as plots in real time (https://www.irf.se/alis/allsky/nowcast/, last access: 11 April 2023). When Level 6 (local-arc breaking) is detected, an automatic alert email is sent to the registered addresses immediately. The alert system started on 5 November 2021, and the results (both Level 6 and Level 4 detections) were compared to manual (eye) identification of the auroral activity in the ASC during the rest of the aurora season of the Kiruna ASC (i.e., all images during a total of 5 months until April 2022 were examined and occasionally double-checked in the sky). Unless the Moon or the cloud blocks the brightened region, a nearly one-to-one correspondence between Level 6 and eye-identified local-arc breaking in the ASC images is achieved with an uncertainty of under 10 min.
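A toy version of stages (1a) and (1b) is sketched below: pixels are classified with RGB and hue thresholds, and the per-category pixel coverage is summarised. The numeric thresholds and category logic here are invented for illustration; the operational criteria are those defined in the paper.

```python
import numpy as np
from PIL import Image
import matplotlib.colors as mcolors

def classify_pixels(jpeg_path):
    """Classify pixels of an all-sky JPEG with simple RGB/hue thresholds (toy values)."""
    rgb = np.asarray(Image.open(jpeg_path).convert("RGB"), dtype=float) / 255.0
    hsv = mcolors.rgb_to_hsv(rgb)
    r, g = rgb[..., 0], rgb[..., 1]
    hue = hsv[..., 0] * 360.0

    # Assumed thresholds: saturated green, or green mixed with red, counts as "strong aurora".
    strong_aurora = (g > 0.85) | ((g > 0.5) & (r > 0.5))
    green_arc = (~strong_aurora) & (g > 0.4) & (hue > 90) & (hue < 170)
    visible_diffuse = (~strong_aurora) & (~green_arc) & (g > 0.2) & (hue > 90) & (hue < 170)
    return strong_aurora, green_arc, visible_diffuse

def asc_index(jpeg_path):
    """Summarise pixel coverage (in percent) per aurora category."""
    strong, arc, diffuse = classify_pixels(jpeg_path)
    n_pix = strong.size
    return {
        "strong_aurora_%": 100.0 * strong.sum() / n_pix,
        "green_arc_%": 100.0 * arc.sum() / n_pix,
        "visible_diffuse_%": 100.0 * diffuse.sum() / n_pix,
    }

# Example usage (file name is a placeholder): print(asc_index("asc_image.jpg"))
```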
{"title":"Auroral alert version 1.0: two-step automatic detection of sudden aurora intensification from all-sky JPEG images","authors":"M. Yamauchi, U. Brändström","doi":"10.5194/gi-12-71-2023","DOIUrl":"https://doi.org/10.5194/gi-12-71-2023","url":null,"abstract":"Abstract. A sudden and significant intensification of the auroral arc with expanding motion (we call it “local-arc breaking” hereafter) is an important event in many aspects but easy to miss for real-time watching due to its short rise time. To ease this problem, a real-time alert system for local-arc breaking was developed for the Kiruna all-sky camera (ASC) using ASC images in the JPEG format. The identification of the local-arc breaking is made in two steps using the “expert system” in both steps: (1) explicit criteria for classification of each pixel and simple calculations afterward are applied to each ASC image to obtain a simple set of numbers, or the “ASC auroral index”, representing the occupancy of aurora pixels and characteristic intensity of the brightest aurora in the image; (2) using this ASC auroral index, the level of auroral activity is estimated, aiming for Level 6 as clear local-arc breaking and Level 4 as a precursor for it (reserving Levels 1–3 for less active aurora and Level 5 for less intense sudden intensification). The first step is further divided into two stages. Stage (1a) uses simple criteria for R (red), G (green), and B (blue) values in the RGB color code and the H (hue) value calculated from these RGB values, each pixel of a JPEG image is classified into three aurora categories (from brightest to faintest, “strong aurora”, “green arc”, and “visible diffuse (aurora)”) and three non-aurora light source categories (“cloud”, “artificial light”, and “Moon”). Here, strong aurora means that the ordinary green color by atomic oxygen's 558 nm emission is either nearly saturated or mixed with red color at around 670 nm emitted, by molecular nitrogen. In stage (1b), the percentage of the occupying area (pixel coverage) for each category and the characteristic intensity of the strong aurora pixels are calculated. The obtained ASC auroral index is posted in both an ASCII format and plots in real time (https://www.irf.se/alis/allsky/nowcast/, last access: 11 April 2023). When Level 6 (local-arc breaking) is detected, an automatic alert email is sent out to the registered addresses immediately. The alert system started on 5 November 2021, and the results (both Level 6 detection and Level 4 detection) were compared to the manual (eye) identification of the auroral activity in the ASC during the rest of the aurora season of the Kiruna ASC (i.e., all images during a total of 5 months until April 2022 were examined and occasionally double-checked in the sky). 
Unless the Moon or the cloud blocks the brightened region, a nearly one-to-one correspondence between Level 6 and eye-identified local-arc breaking in the ASC images is achieved with an uncertainty of under 10 min.\u0000","PeriodicalId":48742,"journal":{"name":"Geoscientific Instrumentation Methods and Data Systems","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45410597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract. We designed a low-cost expendable current profiler (XCP), including both software and hardware. An XCP is an observation instrument that rapidly measures currents based on the principle that seawater moving through the geomagnetic field induces electric fields. The cost of an XCP must be kept low because it is a single-use device. In the previously developed XCP, digitization is carried out underwater, which requires the probe to contain not only analogue circuits for acquiring signals but also digital circuits and chips, which are relatively expensive. In this study, an XCP was developed that adopts signal modulation and demodulation to transmit analogue signals over an enamelled wire, with signal digitization taking place above the water surface. The cost of the instrument was effectively reduced by half while maintaining its ability to measure parameters such as sea current and temperature in real time. Comparison with data processed from laboratory tests showed that the acquisition circuit has an accuracy within 0.1 % and that the XCP analogue circuit developed for the overall system is stable and reliable. The system exhibited an acquisition accuracy better than 50 nV at 16 Hz, and the quality of the acquired signal satisfied the requirements for an XCP instrument.
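The modulation/demodulation idea can be illustrated generically: the low-frequency signal is placed on a carrier, sent over the wire, and recovered by synchronous (coherent) demodulation followed by low-pass filtering. The sketch below uses arbitrary carrier, sampling, and filter settings and is not the instrument's actual circuit.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50_000.0            # Hz, sampling rate of the demo (arbitrary)
f_carrier = 5_000.0      # Hz, carrier used to modulate the low-frequency signal (arbitrary)
t = np.arange(0, 1.0, 1 / fs)

baseband = 1e-6 * np.sin(2 * np.pi * 16.0 * t)           # a 16 Hz, ~1 uV test signal
carrier = np.cos(2 * np.pi * f_carrier * t)
transmitted = (1.0 + baseband / 2e-6) * carrier           # simple amplitude modulation on the wire

# Demodulation: mix with a synchronous reference carrier, then low-pass filter.
mixed = transmitted * carrier
b, a = butter(4, 100.0 / (fs / 2), btype="low")
recovered = filtfilt(b, a, mixed)                         # DC offset + scaled baseband
recovered = (recovered - recovered.mean()) * 2 * 2e-6     # remove offset, undo mixing gain and scaling

print("residual RMS error:", np.sqrt(np.mean((recovered - baseband) ** 2)))
```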
{"title":"Development of an expendable current profiler based on modulation and demodulation","authors":"Keyu Zhou, Qisheng Zhang, Guan-Jhu Chen, Zucan Lin, Yunliang Liu, Pengyu Li","doi":"10.5194/gi-12-57-2023","DOIUrl":"https://doi.org/10.5194/gi-12-57-2023","url":null,"abstract":"Abstract. We designed a low-cost expandable current profiler (XCP) including software and hardware. An XCP is an observation instrument that rapidly measures currents based on the principle that currents cut the geomagnetic field to induce electric fields. The cost of an XCP must be reduced because it is a single-use device. The digitization of the previously developed XCP is carried out underwater, which requires the probe to contain not only analogue circuits for acquiring signals but also digital circuits and digital chips, which are relatively expensive. In this study, an XCP was developed that adopts signal modulation and demodulation to transmit analogue signals on an enamelled wire, and the signal digitization occurs above the surface of the water. The cost of the instrument was effectively reduced by half while maintaining its ability to measure parameters such as sea current and temperature in real time. After comparison with data processed from laboratory tests, the acquisition circuit showed an accuracy within 0.1 % and the XCP analogue circuit developed for the overall system was stable and reliable. The system exhibited an acquisition accuracy higher than 50 nV for 16 Hz, and the quality of the acquired signal satisfied the requirements for an XCP\u0000instrument.\u0000","PeriodicalId":48742,"journal":{"name":"Geoscientific Instrumentation Methods and Data Systems","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43895250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}