{"title":"Technical note: A prototype transparent-middle-layer data management and analysis infrastructure for cosmogenic-nuclide exposure dating","authors":"G. Balco","doi":"10.5194/gchron-2020-6","DOIUrl":null,"url":null,"abstract":"Abstract. Geologic dating methods for the most part do not directly measure ages. Instead, interpreting a geochemical observation as a geologically useful parameter – an age or a rate – requires an interpretive middle layer of calculations and supporting data sets. These are the subject of active research and evolve rapidly, so any synoptic analysis requires repeated recalculation of large numbers of ages from a growing data set of raw observations, using a constantly improving calculation method. Many important applications of geochronology involve regional or global analyses of large and growing data sets, so this characteristic is an obstacle to progress in these applications. This paper describes the ICE-D (Informal Cosmogenic-Nuclide Exposure-age Database) database project, a prototype computational infrastructure for dealing with this obstacle in one geochronological application – cosmogenic-nuclide exposure dating – that aims to enable visualization or analysis of diverse data sets by making middle-layer calculations dynamic and transparent to the user. An important aspect of this concept is that it is designed as a forward-looking research tool rather than a backward-looking archive: only observational data (which do not become obsolete) are stored, and derived data (which become obsolete as soon as the middle-layer calculations are improved) are not stored but instead calculated dynamically at the time data are needed by an analysis application. This minimizes “lock-in” effects associated with archiving derived results subject to rapid obsolescence and allows assimilation of both new observational data and improvements to middle-layer calculations without creating additional overhead at the level of the analysis application.\n","PeriodicalId":12723,"journal":{"name":"Geochronology","volume":"1 1","pages":""},"PeriodicalIF":2.7000,"publicationDate":"2020-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Geochronology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5194/gchron-2020-6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"GEOCHEMISTRY & GEOPHYSICS","Score":null,"Total":0}
Abstract
Geologic dating methods for the most part do not directly measure ages. Instead, interpreting a geochemical observation as a geologically useful parameter – an age or a rate – requires an interpretive middle layer of calculations and supporting data sets. These are the subject of active research and evolve rapidly, so any synoptic analysis requires repeated recalculation of large numbers of ages from a growing data set of raw observations, using a constantly improving calculation method. Many important applications of geochronology involve regional or global analyses of large and growing data sets, so this characteristic is an obstacle to progress in these applications. This paper describes the ICE-D (Informal Cosmogenic-nuclide Exposure-age Database) project, a prototype computational infrastructure for overcoming this obstacle in one geochronological application – cosmogenic-nuclide exposure dating – by making middle-layer calculations dynamic and transparent to the user, thereby enabling visualization and analysis of diverse data sets. An important aspect of this concept is that it is designed as a forward-looking research tool rather than a backward-looking archive: only observational data (which do not become obsolete) are stored; derived data (which become obsolete as soon as the middle-layer calculations are improved) are not stored, but are instead calculated dynamically at the time they are needed by an analysis application. This minimizes the “lock-in” effects associated with archiving derived results that are subject to rapid obsolescence, and allows assimilation of both new observational data and improvements to middle-layer calculations without creating additional overhead at the level of the analysis application.
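The store-observations, compute-on-demand pattern described above can be sketched in a few lines of Python. This is a minimal illustration under assumed names and numbers: the record fields, the simplified production-rate treatment, and the sample values below are hypothetical and are not the actual ICE-D schema or calculation code, which relies on a full online exposure-age calculation service. The sketch uses the standard simple-exposure-age relation with radioactive decay, t = -ln(1 - Nλ/P)/λ, to show how an improved middle-layer calibration propagates to every derived age with nothing re-archived.

```python
"""Sketch of a transparent middle layer: archive only raw observations,
and compute derived exposure ages dynamically whenever they are needed.
All names and values are illustrative assumptions, not the ICE-D code."""
import math
from dataclasses import dataclass

BE10_DECAY = 4.99e-7  # 10Be decay constant (1/yr); half-life ~1.387 Myr


@dataclass(frozen=True)
class Observation:
    """Raw, non-obsolescing measurements for one sample (simplified)."""
    sample_id: str
    concentration: float    # measured 10Be concentration (atoms/g)
    production_rate: float  # site production rate (atoms/g/yr); stands in
                            # for the full scaling calculation


def exposure_age(obs: Observation, production_scale: float = 1.0) -> float:
    """Middle-layer calculation, run at analysis time rather than stored:
    t = -ln(1 - N*lambda/P) / lambda. The production_scale argument stands
    in for a revisable production-rate calibration."""
    p = obs.production_rate * production_scale
    return -math.log(1.0 - obs.concentration * BE10_DECAY / p) / BE10_DECAY


# Only the observation is "archived"; the age itself is never stored.
obs = Observation("SAMPLE-1", concentration=150_000.0, production_rate=5.0)
print(f"{obs.sample_id}: {exposure_age(obs):,.0f} yr")

# When the calibration is revised, every age reflects the improvement
# immediately, with no obsolete derived results left in the database.
print(f"{obs.sample_id}: {exposure_age(obs, production_scale=1.05):,.0f} yr")
```

Keeping the age out of the stored record is the whole point of the design: the `Observation` rows never go stale, while `exposure_age` can be replaced wholesale as scaling methods and calibrations improve.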